The use of artificial intelligence in corporate governance has been widely accepted as beneficial: it keeps employers from relying solely on personal perspectives and facilitates objective decisions. But what if the algorithms themselves were prejudiced? For example, a Google algorithm trained on news articles became sexist and wrote news from a masculine perspective. A similar instance was witnessed at Amazon.
It was discovered that Amazon's algorithm, designed to choose the best candidates for recruitment, actually gave lower scores to women. The algorithm scores applicants from one to five and, trained on Amazon's job applications since 2014, is programmed to choose the best five out of every 100 curriculum vitae (CVs). However, company officials discovered that it was deciding according to gender for software development and other technical positions. The software, developed by men who had worked at the company for years, was found to predominantly choose male applicants. It scored the word "woman" and phrases like "women's basketball team" and "women's solidarity club" negatively, and gave low scores to two applicants who had graduated from all-girls high schools.
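The mechanism behind such bias is simple to reproduce. The following is a toy sketch, not Amazon's actual system: all names and data are hypothetical. A naive scorer that rates CV words by how often they appeared in past successful applications will penalize gender-coded terms whenever the historical hires skewed male.

```python
from collections import defaultdict

# Hypothetical historical data: (CV text, 1 = hired, 0 = rejected).
# Past hires skew male, so female-coded terms co-occur with rejections.
historical = [
    ("java developer chess club", 1),
    ("python engineer football team", 1),
    ("software engineer java", 1),
    ("python developer women's chess club", 0),
    ("java engineer women's basketball team", 0),
    ("software developer football", 1),
]

hired = defaultdict(int)   # how often each word appeared in hired CVs
seen = defaultdict(int)    # how often each word appeared overall
for cv, label in historical:
    for tok in set(cv.split()):
        seen[tok] += 1
        hired[tok] += label

def score(cv):
    """Average historical hire rate of the CV's known words, on a 1-5 scale."""
    toks = [t for t in cv.split() if t in seen]
    rate = sum(hired[t] / seen[t] for t in toks) / len(toks)
    return round(1 + 4 * rate, 2)

print(score("java developer chess club"))          # higher score
print(score("java developer women's chess club"))  # lower: "women's" is penalized
```

The scorer never sees an applicant's gender; the bias enters purely through correlations in the training data, which is exactly how the reported behavior can arise.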
Are recruitment algorithms safe?
Reuters reported that the Amazon team that worked on the software was disbanded at the beginning of last year, and that the company hoped objective results would now be produced for both sexes since the problem had been recognized. Exposing Amazon's human resources issue for the first time, Reuters reported that final hiring decisions are made by unidentified Amazon officials and that the computer program only offered informed suggestions. In the end, Reuters wrote, "Amazon decided that the machine learning technique in the software basis can be used limitedly."

A survey conducted last year by the software company CareerBuilder showed that 55 percent of human resources managers in the U.S. thought that artificial intelligence would be a part of their business in the next five years. However, Amazon's experience has drawn the attention of Hilton Worldwide Holdings Inc. and the Goldman Sachs Group, which employ thousands of people and want to fully automate their recruitment processes. It appears that large institutions seeking to use artificial intelligence in recruitment need to research the software's results before making a decision.
Offering insight on the issue, associate professor Deniz Kılınç, the head of Celal Bayar University's Software Engineering Department in Manisa, said, "The exclusion of artificial intelligence applications that could make our lives easier is very sad for those working in the field of research."
However, she is optimistic that new studies on developing artificial intelligence without prejudice mean there is still hope.
"The scientifically positive aspect of this incident is that further research on the creation of nonbiased artificial intelligence systems will be conducted. Recently, it has become obvious data that can create biases against race, language, religion and sex is not being filtered in the programming processes for artificial intelligence systems. Discussions on the standardization of artificial intelligence systems with more balanced datasets have started. Despite how effective the results being produced seem, the only way to guarantee the reliability of artificial intelligence systems is to create human-supported, semiautomatic systems and correcting wrong decisions through programming," she explained.
According to Kılınç, the potential benefits that artificial intelligence applications provide in the field of human resources still outweigh the drawbacks.
"It is necessary to give a chance to artificial intelligence applications, which can technically analyze applicants using visual and audio processing techniques in interviews, and can give real-time answers to the questions of thousands of applicants simultaneously using chat robots in the recruitment process," she added.
Algorithms to measure ability
A study conducted in the U.S. by a group of scientists, including biomedical engineer Meryem Yücel, an academic at Harvard Medical School's Department of Radiology, showed that human ability in any subject can be measured by algorithms. The method, which could mark a milestone in how people train themselves in their work, was published in the scientific journal Science Advances, showing how machine learning can reveal ability.
The experts who conducted the study monitored 30 surgeons performing operations with a device using near-infrared spectroscopy (NIRS), a noninvasive optical technique that measures brain activity through the absorption of near-infrared light.
The study found that highly skilled surgeons had more active motor cortexes, while those with less skill had more active prefrontal cortexes, the part of the brain that controls complex planning. Based on this data, the experts concluded that less-skilled or inexperienced doctors think more during surgery, while more experienced surgeons appear more relaxed. The researchers suggest the system could be used for training purposes in virtually every occupation.
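The reported finding can be caricatured as a simple decision rule. This is a hypothetical sketch for illustration only, not the study's actual classifier, which applied machine learning to the full NIRS signals; the function name and the activation inputs are assumptions.

```python
def classify_skill(prefrontal_activity, motor_activity):
    """Label a surgeon from mean brain-activation levels (arbitrary units).

    Per the reported finding: experts show relatively more motor-cortex
    activity, novices relatively more prefrontal (planning) activity.
    """
    return "expert" if motor_activity > prefrontal_activity else "novice"

print(classify_skill(prefrontal_activity=0.2, motor_activity=0.9))  # expert
print(classify_skill(prefrontal_activity=0.9, motor_activity=0.2))  # novice
```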