Facebook found itself at the center of controversy once again after its artificial intelligence (AI) mislabeled a video of Black men confronting white police officers and civilians as being "about primates."
After social media users finished watching the clip, published by the Daily Mail earlier this week, they received a prompt asking if they would like to "keep seeing videos about primates."
"This was clearly an unacceptable error and we disabled the entire topic recommendation feature as soon as we realized this was happening so we could investigate the cause and prevent this from happening again,” Facebook spokesperson Dani Lever said in a statement to USA Today.
"As we have said, while we have made improvements to our AI, we know it’s not perfect and we have more progress to make,” she said. "We apologize to anyone who may have seen these offensive recommendations.”
The incident is the latest in a series of racial blunders online, caused by what appears to be racial bias in automated systems.
According to recent studies, facial recognition technology can be prejudiced against people of color and typically has more trouble identifying them. This has resulted in incidents where Black people have been discriminated against or wrongly arrested due to a computer error.
In 2015, Google similarly had to apologize after its Photos application mistakenly identified Black people as "gorillas." The following year, Microsoft offered a mea culpa after its AI chatbot Tay began spouting racial slurs and had to be pulled offline.