In recent years, images have become central to technology. Progress in digitalization depends to a large extent on the success of imaging systems. We will surely trust machines more when imaging systems with artificial intelligence (AI), from cameras and sensors to fully autonomous cars and security cameras, can comprehend images as correctly as a human being and make sense of them. However, we still need some time to reach that point.
In fact, everything started two years ago, when Google's AI mistook a turtle for a rifle. The image, which clearly depicts a turtle to the human eye, was a gun to Google's AI, probably because the animal's camouflage colors and patterns confused it. Since that example, the software world has been working to reduce the number of images that trip up algorithms, or rather to train machines to make sense of objects correctly.
One such study was a joint effort by the University of California, Berkeley and the universities of Washington and Chicago. Researchers from the three universities compiled a set of nearly 7,500 adversarial images. These images confused machine imaging systems so badly that their accuracy dropped by more than 90%, down to success rates of just 2-3%.
Noting that they want to use this set to train machine imaging systems, the researchers called the algorithms' failure a "deep flaw." What confused the machines? According to the researchers, colors, patterns, signs, and objects and shapes in the background misdirect the algorithms. As they put it: "Algorithms focus on some specific textures, traces and details on the image instead of looking integrally." AI imaging systems are still incapable of seeing the world through a human eye.
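This texture-chasing weakness is closely related to what the machine-learning literature calls adversarial examples: tiny, deliberately structured changes to an image that flip a model's prediction. As a minimal illustration only, not the researchers' actual method, here is the classic Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression "classifier" in NumPy; all weights and numbers below are made up for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge every input feature by eps
    in the direction that increases the model's loss for true label y."""
    p = sigmoid(np.dot(w, x) + b)      # model's confidence for class 1
    grad_x = (p - y) * w               # gradient of cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad_x)   # small, structured perturbation

# Toy "image" of three pixels; the model correctly assigns it to class 1 ...
w = np.array([1.0, -2.0, 3.0])
b = 0.0
x = np.array([0.5, -0.5, 0.5])
print(sigmoid(np.dot(w, x) + b) > 0.5)   # True: classified as class 1

# ... but a small adversarial nudge flips the prediction.
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, eps=0.6)
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)   # False: now misclassified
```

The perturbation is bounded by `eps` per pixel, so to a human the "image" barely changes, yet the classifier's decision flips. Real attacks do the same thing against deep networks, exploiting exactly the texture-level shortcuts described above.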
However, this lack of holistic, conceptual understanding is, for now, exactly what makes imaging systems good at chasing the specific signs, images and details in which they are experts. This chasing works especially well in the medical field: according to the latest research, AI is better than humans at detecting cancerous tumors and even heart attacks.
Ai-Da, the Picasso of the robot world
Ai-Da poses with one of her works at Oxford University, U.K.
We have already witnessed that Generative Adversarial Network (GAN) algorithms can be artists. After learning from thousands of painting images and their techniques, GANs revealed a creativity of their own and proved to us that AI can be a potential artist.
Taking this claim a step further, Ai-Da wants to prove that a robot with AI can paint. Ai-Da is a humanoid with an ultra-realistic appearance, named after Ada Lovelace, who lived in the 19th century and is known as the world's first computer programmer. Ai-Da can draw what she sees in real time; she is like a human being in this sense. Her software is programmed to draw what she sees as she sees it, rather than being trained on paintings first and drawing from them afterward.
She is so good that she opened an exhibition titled "Unsecured Futures" at Oxford University, where she was developed. She is said to be inspired by the 20th century's expressionist and cubist movements; Pablo Picasso, Käthe Kollwitz and Max Beckmann are among her muses. Intellectually, her idols are dystopian writers like Aldous Huxley and George Orwell. Her arms, with which she successfully performs her art, were designed in collaboration by Salaheldin Al Abd and Ziad Abass, who also developed her drawing algorithm, and Adam Meller.