Microsoft says it fixed racially biased facial recognition tools
Screenshot from the website of Microsoft's Azure facial recognition system.


Microsoft said Tuesday that its facial recognition system is getting better at identifying women and people with darker skin tones, after the tools' poor performance on nonwhite and female faces exposed racial bias in the company's technology development.

In a blog post Tuesday, Microsoft said error rates for identifying darker-skinned people had been reduced by as much as a factor of 20; a 20 percent error rate, for instance, would fall to 1 percent.

Error rates in identifying women were also cut by a factor of nine, the post said.

The company said it has been training its artificial intelligence (AI) tools with a more diverse dataset.

"If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases," senior researcher Hanna Wallach wrote.

"The Face API team made three major changes. They expanded and revised training and benchmark datasets, launched new data collection efforts to further improve the training data by focusing specifically on skin tone, gender and age, and improved the classifier to produce higher precision results," Wallach said.

In a report released in February, MIT's Media Lab revealed test results on facial recognition tools created by Microsoft, IBM and the Chinese company Megvii. It found that the tools misidentified the gender of up to 35 percent of darker-skinned women.

Microsoft's software in particular had an error rate as high as 20 percent when attempting to identify the gender of men and women with darker skin. In contrast, the tools had a zero-percent error rate in identifying "lighter male faces."
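The audit's underlying method, often called disaggregated evaluation, is simple to express in code: compute the classifier's error rate separately for each demographic subgroup rather than as one aggregate figure. Here is a minimal sketch with entirely hypothetical record fields and toy data:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic subgroup.

    Each record is a dict with hypothetical fields: 'group' (e.g.
    'darker female'), 'true_gender', and 'predicted_gender'.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["predicted_gender"] != rec["true_gender"]:
            errors[rec["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy data illustrating the kind of gap the study reported.
sample = [
    {"group": "lighter male", "true_gender": "male", "predicted_gender": "male"},
    {"group": "lighter male", "true_gender": "male", "predicted_gender": "male"},
    {"group": "darker female", "true_gender": "female", "predicted_gender": "male"},
    {"group": "darker female", "true_gender": "female", "predicted_gender": "female"},
]
print(error_rates_by_group(sample))
# -> {'lighter male': 0.0, 'darker female': 0.5}
```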

The test results revealed that, like many other tech companies, Microsoft was using datasets with a disproportionate lack of dark-skinned and female images. The results also confirmed that AI systems can absorb biases rooted in systemic racism, including through unrepresentative training data.

In 2015, Google's tools identified a group of black friends as "gorillas," an incident that both sparked outcry and raised awareness about technology's relationship with societal bias.

While Microsoft's self-proclaimed improvement may be a step in the right direction, it remains to be seen how the tools will be used, especially as they affect minorities.

In January, Microsoft announced that U.S. Immigration and Customs Enforcement (ICE) would begin using its Azure Government Cloud service, including the facial recognition tools.

As ICE comes under fire for its inhumane treatment of migrant children, Microsoft CEO Satya Nadella stated last week that "Microsoft is not working with the U.S. government on any projects related to separating children from their families at the border." However, he didn't clarify how ICE is using the facial recognition tools in its immigration enforcement work.