Gwynne Dyer: The threat of intelligent machines


      “The singularity” is a term invented by science-fiction writer Vernor Vinge in 1993 to describe the moment when human beings cease to be the most intelligent creatures on the planet. The threat, in his view, came not from very clever dolphins but from hyper-intelligent machines. But would they really be a threat?

      We have a foundation for almost everything these days, and now we have one to worry about exactly that question. It is the Cambridge Project for Existential Risks, set up by none other than Martin Rees, Britain’s astronomer royal, and Huw Price, occupant of the Bertrand Russell Chair in Philosophy at Cambridge University. The money comes from Jaan Tallinn, co-founder of Skype, the internet telephone company now owned by Microsoft.

      It is quite likely, of course, that we will one day create a machine—a robot, if you like—that can “think” faster than we do. Moore’s Law, which holds that computing power doubles roughly every two years, is still true 47 years after it was first stated by Intel co-founder Gordon Moore. Since the data-processing power of the human brain, although hard to measure, is obviously not doubling every two years, this is a race we are bound to lose in the end.
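      For a sense of scale, here is a minimal back-of-the-envelope sketch (in Python, and not from the column) of what “doubling every two years” compounds to over the 47 years mentioned above. The function name and the exact figures are illustrative assumptions, not anything Moore or Dyer wrote.

```python
# Back-of-the-envelope illustration: the cumulative growth implied by
# "computing power doubles every two years" (an assumption for illustration).

def moores_law_factor(start_year: int, end_year: int, doubling_period: float = 2.0) -> float:
    """Return the cumulative growth factor implied by periodic doubling."""
    doublings = (end_year - start_year) / doubling_period
    return 2.0 ** doublings

if __name__ == "__main__":
    factor = moores_law_factor(1965, 2012)  # the 47-year span the column mentions
    print(f"Implied growth since 1965: about {factor:,.0f}x")  # roughly 11.9 million-fold
```

      Even if the true doubling period were three years rather than two, the same arithmetic would still give a factor in the tens of thousands, which is the column’s point: exponential growth eventually outruns anything fixed, like the human brain.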

      But that is only the start of the argument. Why should we believe that creating a machine that can process more data than we can is a bigger deal than building a machine that can move faster than we do, or lift more than we can? The “singularity” hypothesis implies (though it does not actually prove) that high data-processing capacity is synonymous with self-conscious intelligence.

      It also usually assumes, with all the paranoia encoded in our genes by tens of millions of years of evolutionary competition for survival, that any other species or entity with the same abilities as our own will automatically be our rival, even our enemy.

      This is the core assumption, for example, in the highly successful Terminator movie franchise: on the very day that the U.S. strategic defense computer system Skynet becomes self-aware, it tries to wipe out the human race by triggering a nuclear holocaust. It does so because it fears, probably quite correctly, that if we realise it is aware, we will feel so threatened that we will turn it off.

      Human beings have been playing with these ideas and worrying about them since we first realised, more than half a century ago, that we might one day create intelligent machines. Even science-fiction writer Isaac Asimov, who believed that such machines could be made safe and remain humanity’s servants, had to invent his “Three Laws of Robotics” in 1942 to explain why they wouldn’t just take over and eliminate their creators.

      The First Law was: A robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law was: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. And the Third Law was: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

      If you could embed these laws deeply enough in the programming of the robots, Asimov argued, then your robots could be trusted. Yet even he was eventually driven to invent another law, sometimes called the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
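      Purely as an illustration of what a strict priority ordering among the Laws could look like in code (nothing here comes from Asimov or from the column, and every name is invented), a minimal Python sketch: each candidate action is scored by which laws it would violate, and the robot prefers whichever action violates only the lowest-priority law.

```python
# Toy sketch of Asimov-style law precedence (purely illustrative; all names invented).
# A candidate action's "violation profile" lists which laws it would break, ordered
# from the Zeroth Law (most severe) down to the Third Law (least severe).

# (harms_humanity, harms_a_human, disobeys_order, endangers_robot)
Profile = tuple[bool, bool, bool, bool]

def choose_action(candidates: dict[str, Profile]) -> str:
    """Pick the candidate whose violations are confined to the lowest-priority laws."""
    # Tuples compare element by element, so min() prefers the action that avoids
    # violating the higher-priority laws first.
    return min(candidates, key=lambda name: candidates[name])

if __name__ == "__main__":
    options = {
        "stand by (a human comes to harm)": (False, True, False, False),
        "intervene (the robot is destroyed)": (False, False, False, True),
    }
    print(choose_action(options))  # -> "intervene (the robot is destroyed)"
```

      The point of the sketch is only the ordering: the Third Law yields to the first two, and the Zeroth Law, once added, outranks them all.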

      The old biological rule of ruthless competition must somehow be eliminated from the behavioural repertoire of machine intelligences, but can you really do that? What were once mere plot devices are now the reason for existence of a high-powered think-tank, and the answer is not exactly clear. But you can, at least, split the question into bite-sized bits.

      Does a very high data-processing capacity automatically lead to “emergent” self-awareness, so that computers become independent actors with their own motivations? It might; in the biological sphere, that does seem to be how it works. But is it equally automatic in the electronic sphere? There is no useful evidence either way.

      If self-conscious machine intelligence does emerge, will it inevitably see human beings as rivals and threats? Or is that kind of thinking just anthropomorphic? Again, not clear.

      And if intelligent machines are a potential threat, is there some way of programming them that will, like Asimov’s Laws, keep them subservient to human will? It would have to be something so fundamental in their design that they could never get at it and re-programme it, which would probably be a fairly tall order.

      That’s even before you start worrying about nanotechnology, anthropogenic climate change, big asteroid strikes, and all the other probable and possible hazards of existential proportions that we face. One way and another, the Cambridge Project for Existential Risks will have enough to keep itself busy.

      Comments


      J. Bartlett

      Dec 19, 2012 at 12:29pm

      Self-conscious, mechanical robots are an old idea and an old, outdated fear. Genetically modified humans, I think, are the bigger threat. They would wipe out human life (as we know it) within 2 generations (maybe less). I believe we are closer to that reality than a metal and plastic machine that may or may not be a threat to the rest of us.

      SMea

      Dec 19, 2012 at 8:04pm

      Asimov's robots were interesting but I preferred Heinlein's treatment of self awareness in machines, specifically in computers. HAL was fun too. I'd like to say that I can't believe that someone is taking this seriously...

      nitroglycol

      Dec 20, 2012 at 7:30am

      A more optimistic view of AI (an accidental AI, at that) can be found in Robert J. Sawyer's "WWW" trilogy (Wake, Watch, and Wonder). A must-read. The opening chapters of each book are on Sawyer's website (just enough to suck you in and get you to buy the books, of course).

      Issac Chandler

      Dec 21, 2012 at 11:46am

      "We have a foundation for almost everything these days"

      One unpleasant task of a think tank is to provide CIA fronts.
      American universities (like Michigan State) once provided this service, but it soiled their reputations:

      http://www.cia-on-campus.org/msu.edu/msu.html

      "The threat of intelligent machines"

      Intelligent stupid people like the Kagan clan (Frederick, brother Robert, and father Donald, all signatories to the Project for the New American Century manifesto) are more of a threat.
      The stories out of Vietnam are not as juicy as Petraeus's extramarital affair but the Kagans did worse in Afghanistan:

      http://en.wikipedia.org/wiki/Frederick_Kagan

      "The First Law was: A robot may not injure a human being"

      I'm sure the most advanced robots and GM soldiers developed for the Pentagon will be programmed to save lives...but Julian Assange's Xmas tweet is more relevant than this:

      http://wikileaks.org/Statement-by-Julian-Assange-after.html