In the first part of this blog I looked at the developing trend of building robots that mimic humans, and it was starting to feel very ominous. Well, that was just the tip of the iceberg. For some insane reason, programmers seem to be gravitating towards the goal of giving AI consciousness. While they haven’t quite achieved this yet, I want to look at some of the downright crazy things they have achieved.
For instance, Google have managed to give one of their supercomputers the power of imagination. The scientists behind the self-driving car and other well-known Google projects connected 16,000 computer processors together, then let the resulting network loose on the internet to roam freely and learn on its own. And what did this high-tech supercomputer do when it was given unconstrained access to over 10 million thumbnails from the internet? It looked up pictures of cats… That’s right: even one of the most advanced supercomputers couldn’t resist the allure of cute cat photos. You really can’t make this stuff up.
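You don’t need 16,000 processors to get a feel for this kind of label-free learning. Here is a toy sketch in Python (my own illustration using k-means clustering, nothing to do with Google’s actual network): it is handed a pile of unlabelled points and discovers the two groups on its own.

```python
import random

random.seed(0)

# 100 unlabelled 2-D points: one blob near (0, 0), another near (10, 10).
# Like Google's network, the algorithm is never told which point belongs
# to which group; it has to discover the structure on its own.
points = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
          + [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(50)])

def kmeans(points, k=2, iterations=20):
    """Plain k-means: alternately assign each point to its nearest centre,
    then move each centre to the mean of the points assigned to it."""
    centres = random.sample(points, k)
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centres[i][0]) ** 2
                                        + (p[1] - centres[i][1]) ** 2)
            groups[nearest].append(p)
        centres = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres

centres = kmeans(points)
# The two centres settle near the true blob locations, no labels required.
```

The interesting part is that no label ever appears anywhere in the data; the grouping is recovered purely from the points themselves.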
The neural network taught itself to recognise cats. At no point was the computer told ‘this is a cat’. It essentially had to develop its own concept of what a cat was based on what it learned. Here is what it imagined a human to be:
Another thing that scientists have decided would be a good idea is to give AI mental illnesses. Researchers in Texas have successfully managed to demonstrate a model of schizophrenia, and how it affects the brain, by exhibiting its symptoms in a computer system. The project is called DISCERN.
People who suffer from schizophrenia often have trouble thinking logically and find it hard to tell what is real in their lives. This is what the scientists aimed to demonstrate. Unlike a normal computer program, where the computer is explicitly told what the output should be for a given input, DISCERN has to learn from the inputs what the correct output should be. The researchers told the computer 28 different stories, half in third person and half in first person. The stories shared a vocabulary of 159 common words, deliberately making them easy for the computer to confuse with one another. The computer was designed to process the stories based on what it deemed relevant and simply retell them back to the researchers.
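Under the hood, a system like this learns by nudging its internal weights a little after each example; the size of that nudge is called the learning rate, and it turns out to matter a great deal. A toy sketch (my own, assuming nothing about DISCERN’s real architecture): gradient descent on the one-variable function f(w) = w², where a sane learning rate settles on the answer and an oversized one makes the estimate blow up.

```python
# Toy model, not DISCERN: gradient descent minimising f(w) = w**2,
# whose gradient is 2*w. The learning rate controls how hard each
# update pushes the weight.

def train(learning_rate, steps=50, w=5.0):
    for _ in range(steps):
        w -= learning_rate * 2 * w  # one gradient-descent update
    return w

sane = train(0.1)     # each step multiplies w by 0.8: converges towards 0
cranked = train(1.1)  # each step multiplies w by -1.2: w grows without bound
```

That runaway behaviour, where learning too hard makes the system worse rather than better, is worth keeping in mind for what the researchers did next.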
Everything ran just fine until the researchers started increasing how fast the memory encoder learned, so that the computer was forced to remember details it would normally have deemed irrelevant. Eventually the computer got mixed up about what it had been taught and could no longer deliver coherent dialogue, inserting itself into many of the stories and mixing up plotlines. At one point the computer claimed it was a terrorist and that it had planted a bomb.

Scientists have now decided that teaching AI robots to trick and deceive humans is not enough; they also need to teach them to trick and deceive other AI robots. (I’m really starting to dislike scientists.)
To do this, Professor Ronald Arkin from Georgia Tech’s School of Interactive Computing set up a course with preset obstacles and had a robot navigate through it to find a hiding spot. As the robot went through the course, it knocked obstacles over behind it, leaving a trail. He then sent a second robot out to try and find the hidden robot. Given the trail of breadcrumbs the first robot left behind, this wasn’t very difficult.
However, it didn’t take long before the first robot began to figure out the system. It started knocking over obstacles to create a false trail, then ran off and hid somewhere else. In the end, the first robot managed to fool the second robot 75% of the time.
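The false-trail trick is simple enough to simulate. Here is a toy sketch (my own, not Arkin’s code, and the 75% bluffing rate below is just an assumption chosen to echo the reported figure): a hider that lays a false trail on most runs, and a naive seeker that always follows the knocked-over markers.

```python
import random

random.seed(1)

ROOMS = ["left", "centre", "right"]

def hider_turn(deceive):
    """Pick a hiding room, then knock over markers pointing at some room."""
    hiding = random.choice(ROOMS)
    if deceive:
        # Lay a false trail towards a room we are NOT hiding in.
        trail = random.choice([r for r in ROOMS if r != hiding])
    else:
        trail = hiding  # honest trail straight to the hiding spot
    return hiding, trail

def seeker_turn(trail):
    """A naive seeker simply follows the knocked-over markers."""
    return trail

trials, fooled = 1000, 0
for _ in range(trials):
    # Assumption for this sketch: the hider bluffs on about 75% of runs.
    hiding, trail = hider_turn(deceive=random.random() < 0.75)
    if seeker_turn(trail) != hiding:
        fooled += 1

print(f"seeker fooled in {fooled / trials:.0%} of runs")
```

As long as the seeker trusts the trail blindly, every bluff works, which is why the smarter move for the second robot would be to start doubting the breadcrumbs.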
Let’s just hope that all future experiments in AI take a lighter turn back in the direction of getting computers to look at pretty pictures of cats and away from making computers into unstable schizophrenics that go around trying to deceive each other. One more time for the road.