Facebook to shut down facial recognition system and delete data
Facebook said it will shut down its facial-recognition system and delete the faceprints of more than 1 billion people.
Facebook said Tuesday that it will shut down its facial recognition system and delete the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.
“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” said a blog post from Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta. “More than a third of Facebook’s daily active users have opted in to our Face Recognition setting and are able to be recognized, and its removal will result in the deletion of more than a billion people’s individual facial recognition templates.”
He said the company was trying to weigh the positive use cases for the technology “against growing societal concerns, especially as regulators have yet to provide clear rules.”
Facebook’s about-face follows a busy few weeks. On Thursday the company announced Meta as its new corporate name, though the social network itself keeps the Facebook name. The change, it said, will help it focus on building technology for what it envisions as the next iteration of the internet, the “metaverse.”
The company is also facing perhaps its biggest public relations crisis to date after leaked documents from whistleblower Frances Haugen showed that it has known about the harms its products cause and often did little or nothing to mitigate them.
Facebook didn’t immediately respond to questions about how people could verify that their image data were deleted, or what it would be doing with the underlying technology.
About 640 million daily active users have opted in to having their faces recognized by the Facebook system.
Facebook had already been scaling back its use of facial recognition after introducing it more than a decade ago.
The company in 2019 ended its practice of using facial recognition software to identify users’ friends in uploaded photos and automatically suggesting they “tag” them. Facebook was sued in Illinois over the tag suggestion feature.
The decision “is a good example of trying to make product decisions that are good for the user and the company,” said Kristen Martin, a professor of technology ethics at the University of Notre Dame. She added that the move also demonstrates the power of public and regulatory pressure, since the facial recognition system has been the subject of harsh criticism for more than a decade.
Meta Platforms Inc., Facebook’s parent company, appears to be looking at new forms of identifying people. Pesenti said Tuesday’s announcement involves a “company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.”
“Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices,” he wrote. “This method of on-device facial recognition, requiring no communication of face data with an external server, is most commonly deployed today in the systems used to unlock smartphones.”
Apple uses this kind of technology to power its Face ID system for unlocking iPhones.
Researchers and privacy activists have spent years raising questions about the tech industry’s use of face-scanning software, citing studies that found it worked unevenly across boundaries of race, gender or age. One concern has been that the technology can incorrectly identify people with darker skin.
Facial recognition’s first blanket ban arrived in May 2019, when San Francisco became the first city in the nation to bar police and other city agencies from using the technology.
Another problem with face recognition is that in order to use it, companies have had to create unique faceprints of huge numbers of people — often without their consent and in ways that can be used to fuel systems that track people, said Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.
“This is a tremendously significant recognition that this technology is inherently dangerous,” he said.
Concerns also have grown because of increasing awareness of the Chinese government’s extensive video surveillance system, especially as it has been employed in Xinjiang, the region that is home to the Uighurs, one of China’s largely Muslim ethnic minorities.
At least seven states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy. Debate over additional bans, limits and reporting requirements has been underway in about 20 state capitals this legislative session, according to data compiled by the Electronic Privacy Information Center in May of this year.
Amazon has faced questions from senators over a reported contract with Dahua, a Chinese security camera company that touted on its website the ability to alert police when its facial recognition software identifies members of the Uighur ethnic group.
Facebook’s huge repository of images shared by users helped make it a powerhouse for improvements in computer vision, a branch of artificial intelligence. Now, many of those research teams have been refocused on Meta’s ambitions for augmented reality technology, as the company hopes its future users strap on goggles to experience a blend of virtual and physical worlds. Those technologies, in turn, could pose new concerns about how people’s biometric data are collected and tracked.
Meta’s newly wary approach to facial recognition follows decisions last year by other U.S. tech giants such as Amazon, Microsoft and IBM to end or pause their sales of facial recognition software to police, citing concerns about false identifications, amid a broader U.S. reckoning over policing and racial injustice.