Humanity remakes itself

In the early 20th century, robots were imagined as nothing but metal. Yet as scientists have begun making them more and more in the image of humans, and almost as intelligent, a crucial question comes to mind: Can we trust them?



Born in 1920 in Petrovichi, Russia, Isaac Asimov immigrated to the U.S. with his family when he was just two years old. Asimov, the son of a candy shop owner, somehow obtained a typewriter at the age of 15 and started writing short science-fiction stories.

That was because his father allowed him to read only science-fiction stories, in the belief that they were about science and therefore educational. Thus began a literary career rather than a scientific one, yet one whose reach went beyond science and shaped our understanding of what a robot might be.

The origin of the word "robot" is itself interesting. While Asimov and others after him popularized it, the word comes from the Czech "robota," meaning "forced labor."

Between the ages of 18 and 21, Asimov discovered his love of robots and of writing about them. He introduced the "Three Laws of Robotics," a set of rules meant to govern robots as they begin interacting with humans.

Together with John W. Campbell, editor of the magazine Astounding Science Fiction, Asimov came up with these rules:

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey orders given by a human, except where such orders conflict with the First Law.

The Third Law: A robot must protect its own existence, except where such protection conflicts with the First or Second Law.

Before Asimov, the world had conceived of robots as Frankensteins or metal monsters, but Asimov made them more human: domestic, labor-saving devices. He thereby set in motion a world in which humanity imagined remaking itself. There are millions of industrial robots making cars, toys, shoes, and other artifacts, but they do not have the physical appearance of humans.

We are much more startled by, and pay more attention to, robots that somehow look like humans or animals. The humanoid robot ASIMO, created by Honda in 2000, can recognize sounds, faces, and gestures, and can therefore interact with humans in a somewhat meaningful way. ASIMO has traveled around the world, performing in front of audiences and taking their breath away.

However, physical appearance is not the only thing that makes a non-human appear human. Intelligence is much more important than having two arms and two legs. This was clear to Alan Turing, a founding figure of computer science theory, who envisioned a test now called the "Turing Test." A human judge poses natural-language questions (written as text) to a computer and to another human, both hidden behind curtains, and both answer. If the judge cannot distinguish which one is the computer, the computer passes "the intelligence test."
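To make the protocol concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the "computer" is a trivial canned-answer bot and the "human" replies are hard-coded. But the structure is Turing's: a judge poses text questions to two hidden respondents, then guesses which one is the machine.

```python
import random

# Both respondents are stand-ins; the "computer" is a trivial
# canned-answer bot, used only to show the shape of the protocol.
def computer_answer(question: str) -> str:
    canned = {
        "What is 2 + 2?": "4",
        "Do you have feelings?": "Of course I do.",
    }
    return canned.get(question, "That is an interesting question.")

def human_answer(question: str) -> str:
    # In a real test a person would type this; here replies are hard-coded.
    replies = {
        "What is 2 + 2?": "Four, obviously.",
        "Do you have feelings?": "Yes, too many some days.",
    }
    return replies.get(question, "Hmm, let me think about that.")

def turing_test(questions):
    # Hide the two respondents behind the anonymous labels A and B.
    respondents = [("computer", computer_answer), ("human", human_answer)]
    random.shuffle(respondents)
    labels = dict(zip("AB", respondents))

    for question in questions:
        print(f"Judge asks: {question}")
        for label, (_, answer) in labels.items():
            print(f"  {label}: {answer(question)}")

    guess = input("Which respondent is the computer (A/B)? ").strip().upper()
    actual = next(l for l, (kind, _) in labels.items() if kind == "computer")
    print("The judge was right." if guess == actual
          else "The computer passed this round.")

turing_test(["What is 2 + 2?", "Do you have feelings?"])
```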

Asimov's robots were actual robots even as they interacted with humans, and their potentially harmful behavior was curbed by the Three Laws. Robots with human levels of intelligence, however, may be a completely different story. But how do we "insert" intelligence into robots, whether or not they have the physical appearance and agility of humans?

Finally, we have discovered how to do this. It is called "deep learning" and has completely transformed artificial intelligence (AI) research.

From the beginning, in the 1960s, there were two schools of thought on how to build intelligent machines. The first group believed we could write code that applies rules of logic to make decisions, letting the robot set its own direction in the labyrinth of interactions.

The second group believed intelligence would emerge if machines followed the path of biology and learned by observing and experiencing. This group was suggesting an approach diametrically opposed to conventional computer programming: instead of the programmer creating an algorithm and writing commands to follow it, the program itself (the machine) generates its own algorithm from example data and the desired outputs, as in the sketch below.
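Here is a minimal sketch of that contrast, using the logical AND function as toy example data (the data and settings are invented for illustration; only NumPy is assumed). One side is a rule written by hand; the other is a single artificial neuron, a perceptron, that derives an equivalent rule purely from example inputs and desired outputs.

```python
import numpy as np

# Example data: inputs and the desired outputs (the logical AND function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# School 1: the programmer writes the rule explicitly.
def rule_based(a, b):
    return 1 if a == 1 and b == 1 else 0

# School 2: a single artificial neuron learns its own rule from examples.
w = np.zeros(2)   # adjustable connection weights
b = 0.0           # adjustable threshold (bias)
for _ in range(10):                      # a few passes over the data
    for inputs, target in zip(X, y):
        prediction = 1 if inputs @ w + b > 0 else 0
        error = target - prediction
        w += error * inputs              # the perceptron update rule
        b += error

# Both now compute AND, but the second "algorithm" was never written by hand.
for inputs in X:
    learned = 1 if inputs @ w + b > 0 else 0
    print(inputs, rule_based(*inputs), learned)
```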

"The machine programs itself" was a revolutionary idea but it was too early for its time. It required a vast amount of memory for input and output data, and a network of reconfigurable tiny computers (neurons) to shape themselves as to the learned algorithm. Neither was available in the 1970s or even 1980s.

By the 1990s, however, special classes of reconfigurable computers (called artificial neural networks) were sufficiently advanced to perform certain learning tasks, such as recognizing handwritten text.
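As a rough illustration of how routine that once-hard task has become, here is a hedged sketch using scikit-learn's small bundled digits dataset and its MLPClassifier neural network; the layer size and iteration count are arbitrary choices for the example, not canonical settings.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small bundled dataset: 8x8 grayscale images of handwritten digits 0-9.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small feedforward neural network; sizes here are illustrative, not tuned.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(f"Test accuracy: {net.score(X_test, y_test):.2%}")
```

Run as-is, the network learns to classify digits it has never seen, which is the point: no one wrote the recognition rules.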

Another decade and a half later, we can build neural networks with millions of nodes, and large datasets have become available to train them. Deep learning is responsible for today's explosion of AI applications. It is now being used to make key decisions in many areas of medicine, finance, and perception.

However, we have a problem. The algorithms formed from datasets and desired outputs are highly opaque structures. After being trained on a million pairs of inputs and outputs, the machine starts making sensible decisions on new inputs; however, our models of its behavior are incomplete.

We build machines that work, but we don't know how they work. Deep learning is a black box; some would even call it dark magic. One cannot look inside a deep neural network and read off its reasoning: that reasoning is embedded in the behavior of millions of artificial neurons, arranged into dozens or even hundreds of intricately interconnected layers.
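A back-of-the-envelope calculation shows the scale. For a hypothetical fully connected network, with layer sizes chosen purely for illustration, the count of learned parameters runs into the millions, and each one is just an unlabeled number set by training:

```python
# Hypothetical fully connected network: 784 inputs (a 28x28 image),
# three hidden layers, 10 outputs. Layer sizes are illustrative only.
layers = [784, 1024, 1024, 1024, 10]

# Each adjacent pair of layers contributes a full weight matrix plus biases.
weights = sum(a * b for a, b in zip(layers, layers[1:]))
biases = sum(layers[1:])
print(f"{weights + biases:,} learned parameters")  # about 2.9 million
```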

While ingenious methods have been devised to try to recapture how these networks work, our models of them are far from complete. As a final word, this warning from the philosopher Daniel Dennett is fitting: "If we're going to use these things and rely on them, then let's get as firm a grip on how and why they're giving us the answers as possible," he says. "If it can't do better than us at explaining what it's doing, then don't trust it."