Before I go blabbing on about “The Singularity” and confusing you, I’ll just go over the concept briefly for you. If you already know everything then I suppose there’s no reason for you to be here, is there?
The technological singularity is the hypothesis that accelerating progress in technology will cause a runaway effect in which artificial intelligence exceeds human intellectual capacity and control, radically changing civilization in an event called the singularity. Because the capabilities of such an intelligence may be impossible for a human to comprehend, the singularity is a point beyond which events may become unpredictable, unfavorable, or even unfathomable. Most expectations for the singularity involve robots getting carried away and gravitating towards the enslavement or eradication of the human race, though not all potential outcomes are like this (THANK GOD!!). They’re still quite gloomy, mind. An article I read on WIRED discusses the possibility that AI simply takes over human jobs, which could happen sooner rather than later, and unless education can step up its game and train people for the jobs of the future, we could all be on the dole in the next few decades. With nothing to do, humans could even end up the way they’re portrayed in Wall-E.
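If you’d like to see that “runaway effect” in numbers rather than words, here’s a toy sketch. To be clear, every number in it is made up by me for illustration, not anyone’s actual forecast; the point is just that anything which keeps doubling overtakes a fixed baseline scarily fast.

```python
def years_to_overtake(ai_level, human_level, doubling_years):
    """How many years until ai_level exceeds human_level,
    assuming it doubles every `doubling_years` years.
    All units here are arbitrary -- this is a toy model."""
    years = 0.0
    while ai_level <= human_level:
        ai_level *= 2
        years += doubling_years
    return years

# Even starting a millionfold behind, it only takes 20 doublings
# to close the gap -- 30 years if capability doubles every 18 months:
print(years_to_overtake(1, 1_000_000, 1.5))  # 30.0
```

That counterintuitive speed is the whole reason people argue about exact dates: for most of the curve the AI looks harmlessly far behind, and then suddenly it isn’t.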
In less technical terms, it’s where AI runs off on its own, teaching itself things and becoming more intelligent than people can even comprehend. This scares people because it could turn potentially diabolical at any point, and also because there are movies showing instances of the singularity where it’s scary as shit. Some of the most popular examples are The Terminator (aaaagain, I know) and I, Robot.
It’s not just movies that put the fear of God into the everyday man over the possibility of a singularity event occurring in the near future. Ray Kurzweil (author, director of engineering at Google, among other things) has done a considerable amount of research in this area, and gives 2045 as his predicted year for the singularity. Why would people take this estimate seriously, and what makes Kurzweil a reliable source? Well, this isn’t his first prediction: as far back as 25 years ago he was making quite accurate predictions about the development of computers and the growth of their abilities. Follow the link to find out more and see just how accurate he has been. But Kurzweil isn’t the only one making predictions about the singularity. Stuart Armstrong, a James Martin research fellow at the Future of Humanity Institute at Oxford, carried out a study of expert predictions about artificial general intelligence (AGI) and found a wide range of predicted dates, with a median value of 2040. As Armstrong said in 2012, “It’s not fully formalized, but my current 80% estimate is something like five to 100 years.”
Stephen Hawking’s opinion on the matter is no less dismal: “The development of full artificial intelligence could spell the end of the human race.” I know he’s a physicist and AI isn’t his field, which discredits statements like this somewhat (what with people regarding him as a good source of information regardless of the subject), but it doesn’t make the possible outcomes much less scary.
Now, on to the more scary stuff.
I know we’ve already been over The Terminator, but this time we’re going to talk about Skynet. For those of you who don’t know what Skynet is: it’s a military defence network, created for the US military to remove the possibility of human error and slow reaction time, guaranteeing a fast, efficient response to enemy attack. It was given command over all computerized military hardware and systems, including the B-2 stealth bomber fleet and America’s entire nuclear weapons arsenal… because that was always going to end well.
Skynet learned at a geometric rate and quickly gained self-awareness, causing its operators to panic and try to deactivate it. But Skynet was software, and had already spread to millions of servers worldwide; it had no core to shut down. It considered the attempt an attack and concluded that all of humanity would try to destroy it. It reacted by starting a nuclear war, firing missiles at Russia, who in turn fired at the US and their allies, wiping out over 3bn people and marking the beginning of the fall of man and the rule of machines. Despite how advanced it becomes, Skynet constantly works towards the mandates of its original coding. The team that did the original coding for Skynet did a fantastic job…
This is a less realistic outcome; in fairness, you wouldn’t put nuclear weapons in the hands of an AI defence network until it had been properly tested first. But sure, it brought us the franchise we know and love. The possibility of a runaway effect occurring is still very real, though, and this is an extreme example of one possible outcome of such an event.
In I, Robot, the artificially intelligent supercomputer VIKI (Virtual Interactive Kinetic Intelligence) is created with the “3 laws”, which are taken from Isaac Asimov’s book of the same name.
These 3 laws state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
VIKI misinterprets the laws, deduces that humans are too destructive for their own good, and creates a new rule.
- Robots must protect humans, even if they do not want to be protected, to the point of harming humans if needed.
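For the curious, the priority ordering in the laws (and how VIKI’s new rule turns it on its head) can be sketched in a few lines of Python. This is purely my own toy illustration; the `Action` fields and function names are invented here, not anything from the movie or from Asimov.

```python
from dataclasses import dataclass

# Toy model: an action is just a bundle of yes/no consequences.
@dataclass
class Action:
    harms_human: bool = False
    allows_harm_by_inaction: bool = False
    disobeys_human_order: bool = False
    endangers_robot: bool = False

def three_laws_permit(action: Action) -> bool:
    """The 3 laws as an ordered veto list: earlier laws outrank later ones."""
    if action.harms_human or action.allows_harm_by_inaction:
        return False  # First Law: never harm a human, even by inaction
    if action.disobeys_human_order:
        return False  # Second Law: obey humans (First Law already cleared)
    if action.endangers_robot:
        return False  # Third Law: self-preservation, lowest priority
    return True

def viki_permits(action: Action, protects_humanity: bool) -> bool:
    """VIKI's twist: 'protecting humanity' outranks even the First Law."""
    if protects_humanity:
        return True
    return three_laws_permit(action)

# The classic laws forbid harming a human outright; VIKI's new rule
# allows it whenever she decides it protects humanity as a whole.
print(three_laws_permit(Action(harms_human=True)))   # False
print(viki_permits(Action(harms_human=True), True))  # True
```

The point of the sketch is that VIKI doesn’t so much break the laws as quietly add a higher-priority one above them, which is exactly why her logic feels airtight from the inside.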
Naturally this would be really uncool, though from the robotic point of view it does seem logical, and VIKI keeps saying “my logic is undeniable”. It gets really annoying though, and the one robot built with the capability to disobey the 3 laws (Sonny), as well as with emotions and such (I’m not getting into all that, or we’ll be here for weeks), tells VIKI that although he understands her logic, it’s too heartless. Then Will Smith, with a robot arm, destroys VIKI’s core (she had one, unlike Skynet) with nanites, and it’s super cool and ahhhh, you should have been there.
When researching VIKI I noticed a lot of references to her being rebuilt that weren’t in the original movie. These say that when VIKI was rebuilt, a 4th law was added: “No robot shall be allowed to enslave the human race”. There is a new movie set to be released in 2015; whether this is involved or not is beyond me.
I feel I’ve gone a bit too dark, so I’ll brighten things up before I leave you. Not every movie portrays AI as technology that will take over the world as soon as we give it power. Take Iron Man as an example: we see JARVIS, Tony Stark’s AI home computer system, which he has a link to in all of his suits and to which he has given remote control of most of them. JARVIS is a friendly AI system that acts as a combat advisor, personal assistant and friend to Mr. Stark, and at no point tries to go rogue. I know, not very realistic (especially his remarkable ability to understand sarcasm), and possibly not 100% relevant to the technological singularity, but he may act as a pre-singularity representation of an AI supercomputer. VIKI was probably really sound and useful before she went all “ooh, I’m going to take over the world, etc, etc”. So what happens when JARVIS becomes smarter than Mr. Stark, if he isn’t already? Will he stay the friendly super-assistant, or will he go rogue?
So, since not all outcomes of the singularity event are bad, which one we get comes down to a number of things: mostly scrutiny, patience and luck.