Facebook to end facial recognition system, delete faceprints
Facebook CEO Mark Zuckerberg is seen after fencing in the "Metaverse" with an Olympic gold medal fencer during a live-streamed virtual and augmented reality conference to announce the rebrand of Facebook as Meta, in this screen grab taken from a video released Oct. 28, 2021. (Facebook/Handout via REUTERS)

In a major shift in data collection and digital privacy, Facebook will shut down its facial recognition system and erase the faceprints of over one billion people, the company said Tuesday.

"This change will represent one of the largest shifts in facial recognition usage in the technology’s history," said a blog post Tuesday from Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta. "Its removal will result in the deletion of more than a billion people’s individual facial recognition templates."

He said the company was trying to weigh the positive use cases for the technology "against growing societal concerns, especially as regulators have yet to provide clear rules."

Facebook’s about-face follows a busy few weeks for the company. On Thursday it announced a new name – Meta – for the company, but not the social network. The new name, it said, will help it focus on building technology for what it envisions as the next iteration of the internet – the "metaverse."

The company is also facing perhaps its biggest public relations crisis to date after leaked documents from whistleblower Frances Haugen showed that it knew about the harms its products cause and often did little or nothing to mitigate them.

More than a third of Facebook’s daily active users have opted in to have their faces recognized by the social network’s system. That’s about 640 million people. But Facebook has recently begun scaling back its use of facial recognition after introducing it more than a decade ago.

The company in 2019 ended its practice of using face recognition software to identify users’ friends in uploaded photos and automatically suggesting they "tag" them. Facebook was sued in Illinois over the tag suggestion feature.

The decision "is a good example of trying to make product decisions that are good for the user and the company," said Kristen Martin, a professor of technology ethics at the University of Notre Dame. She added that the move also demonstrates the power of regulatory pressure, since the face recognition system has been the subject of harsh criticism for over a decade.

Researchers and privacy activists have spent years raising questions about the technology, citing studies that found it worked unevenly across boundaries of race, gender or age.

Concerns also have grown because of increasing awareness of the Chinese government’s extensive video surveillance system, especially as it’s been employed in a region home to one of China’s largely Muslim ethnic minority populations.

Some United States cities have moved to ban the use of facial recognition software by police and other municipal departments. In 2019, San Francisco became the first U.S. city to outlaw the technology, which has long alarmed privacy and civil liberties advocates.

At least seven states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy. Debate over additional bans, limits and reporting requirements has been underway in about 20 state capitals this legislative session, according to data compiled in May by the Electronic Privacy Information Center.

Meta’s newly wary approach to facial recognition follows decisions last year by other U.S. tech giants, including Amazon, Microsoft and IBM, to end or pause their sales of facial recognition software to police, citing concerns about false identifications, amid a broader U.S. reckoning over policing and racial injustice.

President Joe Biden’s science and technology office in October launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character.

European regulators and lawmakers have also taken steps toward blocking law enforcement from scanning facial features in public spaces, as part of broader efforts to regulate the riskiest applications of artificial intelligence.

Facebook’s face-scanning practices also contributed to the $5 billion fine and privacy restrictions imposed by the Federal Trade Commission in 2019. Facebook’s settlement with the FTC after the agency’s yearlong investigation included a promise to require "clear and conspicuous" notice before people’s photos and videos were subjected to facial recognition technology.