Clearview AI uses your online photos to instantly ID you. That’s a problem, lawsuit says
Clearview AI has amassed a database of more than 3 billion photos of individuals by scraping sites such as Facebook, Twitter, Google and Venmo. It’s bigger than any other known facial-recognition database in the U.S., including the FBI’s. The New York company uses algorithms to map the pictures it stockpiles, determining, for example, the distance between an individual’s eyes to construct a “faceprint.”
This technology appeals to law enforcement agencies across the country, which can use it in real time to help determine people’s identities.
It also has caught the attention of civil liberties advocates and activists, who allege in a lawsuit filed Tuesday that the company’s automatic scraping of their images and its extraction of their unique biometric information violate privacy and chill protected political speech and activity.
The plaintiffs — four individual civil liberties activists and the groups Mijente and NorCal Resist — allege Clearview AI “engages in the widespread collection of California residents’ images and biometric information without notice or consent.”
This is especially consequential, the plaintiffs argue, for proponents of immigration or police reform, whose political speech may be critical of law enforcement and who may be members of communities that have been historically over-policed and targeted by surveillance tactics.
Clearview AI enhances law enforcement agencies’ efforts to monitor these activists, as well as immigrants, people of color and those perceived as “dissidents,” such as Black Lives Matter activists, and can potentially discourage their engagement in protected political speech as a result, the plaintiffs say.
The lawsuit, filed in Alameda County Superior Court, is part of a growing effort to restrict the use of facial-recognition technology. Bay Area cities — including San Francisco, Oakland, Berkeley and Alameda — have led that charge and were among the first in the U.S. to limit the use of facial recognition by local law enforcement in 2019.
Yet the push comes at a time when consumer expectations of privacy are low, as many have come to see the use and sale of personal information by companies such as Google and Facebook as an inevitability of the digital age.
Unlike other uses of personal information, facial recognition poses a unique danger, said Steven Renderos, executive director of MediaJustice and one of the individual plaintiffs in the lawsuit. “While I can leave my cellphone at home [and] I can leave my computer at home if I wanted to,” he said, “one of the things that I can’t really leave at home is my face.”
Clearview AI was “circumventing the will of a lot of people” in the Bay Area cities that banned or limited facial-recognition use, he said.
Enhancing law enforcement’s ability to instantaneously identify and track individuals is potentially chilling, the plaintiffs argue, and could inhibit the members of their groups or Californians broadly from exercising their constitutional right to protest.
“Imagine thousands of police officers and ICE agents across the country with the ability to instantaneously know your name and job, to see what you’ve posted online, to see every public photo of you on the internet,” said Jacinta Gonzalez, a senior campaign organizer at Mijente. “This is a surveillance nightmare for all of us, but it’s the biggest nightmare for immigrants, people of color, and everyone who’s already a target for law enforcement.”
The plaintiffs are seeking an injunction that would force the company to stop collecting biometric information in California. They are also seeking the permanent deletion of all images and biometric data or personal information in the company’s databases, said Sejal R. Zota, a legal director at Just Futures Law and one of the attorneys representing the plaintiffs in the suit. The plaintiffs are also being represented by BraunHagey & Borden.
“Our plaintiffs and their members care deeply about the ability to control their biometric identifiers and to be able to continue to engage in political speech that is critical of the police and immigration policy free from the threat of clandestine and invasive surveillance,” Zota said. “And California has a Constitution and laws that protect these rights.”
In a statement Tuesday, Floyd Abrams, an attorney for Clearview AI, said the company “complies with all applicable law and its conduct is fully protected by the 1st Amendment.”
It’s not the first lawsuit of its kind — the American Civil Liberties Union is suing Clearview AI in Illinois for allegedly violating the state’s biometric privacy act. But it is one of the first lawsuits filed on behalf of activists and grass-roots organizations “for whom it is vital,” Zota said, “to be able to continue to engage in political speech that is critical of the police, critical of immigration policy.”
Clearview AI faces scrutiny internationally as well. In January, the European Union said Clearview AI’s data processing violates the General Data Protection Regulation. Last month, Canada’s privacy commissioner, Daniel Therrien, called the company’s services “illegal” and said they amounted to mass surveillance that put all of society “continually in a police lineup.” He demanded the company delete the images of all Canadians from its database.
Clearview AI has seen widespread adoption of its technology since its founding in 2017. Chief Executive Hoan Ton-That said in August that more than 2,400 law enforcement agencies were using Clearview’s services. After the January riot at the U.S. Capitol, the company saw a 26% jump in law enforcement’s use of the tech, Ton-That said.
The company continues to sell its tech to police agencies across California as well as to Immigration and Customs Enforcement, according to the lawsuit, despite several local bans on the use of facial recognition.
The San Francisco ordinance that limits the use of facial recognition specifically cites the technology’s proclivity “to endanger civil rights and civil liberties” and “exacerbate racial injustice.”
Studies have shown that facial-recognition technology falls short in identifying people of color. A 2019 federal study concluded Black and Asian people were about 100 times more likely to be misidentified by facial recognition than white people. There are now at least two known cases of Black people being misidentified by facial-recognition technology, leading to their wrongful arrest.
Ton-That previously told The Times that an independent study showed Clearview AI had no racial biases and that there were no known instances of the technology leading to a wrongful arrest.
The ACLU, however, has previously called the study into question, specifically saying it is “highly misleading” and that its claim that the system is unbiased “demonstrates that Clearview simply does not understand the harms of its technology in law enforcement hands.”
Renderos said that making facial recognition more accurate doesn’t make it less harmful to communities of color or other marginalized groups.
“This isn’t a tool that exists in a vacuum,” he said. “You’re placing this tool into institutions that have a demonstrated ability to racially profile communities of color, Black people in particular.... The most neutral, the most accurate, the most effective tool — what it will just be more effective at doing is helping law enforcement continue to over-police and over-arrest and over-incarcerate Black people, Indigenous people and people of color.”