DOZENS OF ADVOCATES—including district attorneys, MIT researchers, educators, and tech business leaders—sounded the alarm this week about an unprecedented threat to our civil liberties. We testified before state lawmakers and told them the urgent truth: Massachusetts must press pause on government use of face surveillance technology.
Face surveillance evokes scenes from books and movies set in dystopian futures—but this technology is already in use at sports stadiums, retail stores, and even schools across the country. Here in Massachusetts, it is deployed and aggressively marketed without any oversight or regulation.
For example, the Massachusetts Registry of Motor Vehicles (RMV) obtained a facial recognition system in 2006, and has since made its driver’s license database available to federal, state, and local law enforcement agencies for face surveillance searches. These searches take place hundreds of times a year without legislative authorization, external oversight, public disclosure, or even a required showing of probable cause or reasonable suspicion of criminal activity. That means millions of people—anyone who has applied for a state-issued ID—are in a perpetual police lineup where everyone is a suspect.
A review of materials the ACLU obtained via public records litigation suggests RMV officials have never—not once—declined a police request to perform one of these facial recognition searches against the RMV’s database. The RMV also confirmed that it has never once audited the system to see if fraud, misuse, or abuse has taken place. In other words, the state is using powerful surveillance technology, but no one is minding the store. Making matters worse, criminal defendants are not notified that face surveillance was used to identify them, undermining their right to a fair trial and shielding the technology from scrutiny in the courts.
Meanwhile, start-ups like Cambridge-based Suspect Technologies have been pushing their wares on our local police departments—while at the same time admitting that their systems may work only a fraction of the time. In fact, a recent ACLU of Massachusetts test showed that Amazon’s facial recognition product falsely matched 27 New England professional athletes to faces in a mugshot database. Using the program’s default settings, nearly one in six of the athletes was mistakenly matched with an image in the arrest photo database. Elsewhere, police in the UK tested face surveillance systems “in the wild,” and those trials produced error rates of 91 and 98 percent.
Face surveillance technology makes mistakes and, absent oversight, can cause serious harm. A Colorado man was permanently physically injured and lost his house, his children, and his career after police falsely accused him of bank robbery on the basis of a faulty face recognition search. Homeless, unemployed, and suffering from permanent injury due to his violent arrest, he later told a reporter unregulated face recognition technology in the hands of law enforcement ruined his life. Closer to home, a Brown University student woke up this spring to texts and calls from family members in her home country of Sri Lanka, warning her that the government was calling her a terrorist. A face recognition system had mistakenly identified her as one of the Easter Bombing culprits. While police eventually corrected their mistake, it was too late: she and her family received death threats.
While anyone can be a victim of a false match, the technology poses particularly serious threats of misidentification to women and people of color. According to research by MIT scientists, even face surveillance algorithms sold by the most prominent technology companies can exhibit troubling racial and gender bias, misclassifying darker-skinned women up to 35 percent of the time.
Face surveillance is dangerous when it doesn’t work, and when it does. The technology poses unprecedented threats to core civil rights and civil liberties, and to our free, open, democratic society. Face surveillance gives the government unprecedented power to track who we are, where we go, what we do, and who we know. It threatens to create a world where people are watched and identified as they attend a protest, congregate at a place of worship, visit a medical provider, and go about their daily lives.
Massachusetts State Police emails suggest the agency may already be using face surveillance to identify people exercising their First Amendment rights at political demonstrations. That’s precisely the type of abuse that flourishes in the dark. Spying on political protesters is grotesque, but other, more mundane kinds of abuse are no less chilling. A handwritten search log disclosed by the RMV to the ACLU revealed shockingly poor record-keeping practices, raising major questions about who is accessing this extremely sensitive information and why. Without intervention from the legislature, we may never know who is scanning our faces, and we may never be able to hold them accountable.
As a nationwide leader in both technology and liberty, Massachusetts should show the way and ensure that our rights keep pace with advancing technologies. It’s time for Massachusetts to press pause on face surveillance, and pass a statewide moratorium on government use of the technology until safeguards are in place.
Carol Rose is the executive director of the ACLU of Massachusetts.