Gwynne Dyer: The threat of artificial intelligence


      The experts run the whole gamut from A to B, and they’re practically unanimous: artificial intelligence is going to destroy human civilization.

      Expert A is Elon Musk, polymath co-founder of PayPal, manufacturer of Tesla electric cars, creator of SpaceX, the first privately funded company to send a spacecraft into orbit, and much else besides. “I think we should be very careful about Artificial Intelligence (AI),” he told an audience at the Massachusetts Institute of Technology in October. “If I were to guess what our biggest existential threat is, it’s probably that.”

      Musk warned AI engineers to “be very careful” not to create robots that could rule the world. Indeed, he suggested that there should be regulatory oversight “at the national and international level” over the work of AI developers, “just to make sure that we don’t do something very foolish.”

      Expert B is Stephen Hawking, the world’s most famous theoretical physicist and author of the best-selling unread book ever, “A Brief History of Time”. He has a brain the size of Denmark, and last Monday he told the British Broadcasting Corporation that “the development of full artificial intelligence could spell the end of the human race.”

      Hawking has a motor neurone disease that compels him to speak with the aid of an artificial speech generator. The new version he is getting from Intel learns how Professor Hawking thinks, and suggests the words he might want to use next. It’s an early form of AI, so naturally the interviewer asked him about the future of that technology.

      A genuinely intelligent machine, Hawking warned, “would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.” So be very, very careful.

      Musk and Hawking are almost 50 years behind popular culture in their fear of rogue AI turning against human beings (HAL in “2001: A Space Odyssey”). They are a full 30 years behind the concept of a super-computer that achieves consciousness and instantly launches a war of extermination against mankind (Skynet in the Terminator films).

      Then there’s “The Matrix”, “Blade Runner” and similar variations on the theme. It’s taken a while for the respectable thinkers to catch up with all this paranoia, but they’re there now. So everybody take a tranquiliser, and let’s look at this more calmly. Full AI, with capacities comparable to the human brain or better, is at least two or three decades away, so we have time to think about how to handle this technology.

      The risk that genuinely intelligent machines which don’t need to be fed or paid will eventually take over practically all the remaining good jobs—doctors, pilots, accountants, etc.—is real. Indeed, it may be inevitable. But that would only be a catastrophe if we cannot revamp our culture to cope with a great deal more leisure, and restructure our economy to allocate wealth on a different basis than as a reward for work.

      Such a society might well end up as a place in which intelligent machines had “human” rights before the law, but that’s not what worries the sceptics. Their fear is that machines, having achieved consciousness, will see human beings as a threat (because we can turn them off, at least at first), and that they will therefore seek to control or even eliminate us. That’s the Skynet scenario, but it’s not very realistic.

      The saving grace in the real scenario is that AI will not arrive all at once, with the flip of a switch. It will be built gradually over decades, which gives us time to introduce a kind of moral sense into the basic programming, rather like the innate morality that most human beings are born with. (An embedded morality is an evolutionary advantage in a social species.)

      Our moral sense doesn’t guarantee that we will always behave well, but it certainly helps. And if we are in charge of the design, not just blind evolution, we might even do better. Something like Isaac Asimov’s Three Laws of Robotics, which the Master laid down 72 years ago, might serve.

      First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

      Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

      Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

      Not a bad start, although in the end there will inevitably be a great controversy among human beings as to whether self-conscious machines should be kept forever as slaves. The trick is to find a way of embedding this moral sense so deeply in the programming that it cannot be circumvented.

      As Google's director of engineering, Ray Kurzweil, has observed, however, it may be hard to write an algorithmic moral code strong enough to constrain and contain super-smart software.

       We probably have a few decades to work on it, but we are going to go down this road – the whole ethos of this civilisation demands it – so we had better figure out how to do that.

      Gwynne Dyer is an independent journalist whose articles are published in 45 countries.

      Comments


      WASHINGTONY

      Dec 3, 2014 at 12:39pm

      The "problem" is that any intelligent ethical machine will determine that humans lack all but a glimmer of intelligence and ethics.

      Alex T

      Dec 3, 2014 at 2:07pm

      It's ridiculous to say there's unanimity when neither Hawking nor Musk is an expert in AI. And the three laws sound fine, but we aren't even on track for building a system that would recognize them, let alone know how to interpret them in ways we'd recognize.

      As scary and fun as this speculation might be, there's very little chance that it will come to pass. No research group has been able to replicate the intelligence of a zebrafish, let alone a mouse or a human. We aren't even on a path towards doing this. The best demos of what people think of as AI might be something like Watson (IBM's entry into Jeopardy), but it doesn't have anything like what we'd call "intelligence".

      Sleep easy. And Dyer should stick to politics; if he is going to dip into science, he should find actual experts.

      RUK

      Dec 3, 2014 at 2:24pm

      It's fashionable to be nihilistic, Washingtony, but let's have some perspective. Our species has never really been at a level of comfort until the last, say, hundred years. We are now so comfortable, at least in the west, that we are suffering from obesity, a disease of surfeit. If Maslow was right, then as a culture we can look up from tooth 'n nail personal survival concerns and start working on what is good for everyone. And indeed the last few decades have seen an increasing awareness of human rights, and not only for humans. The Universal Declaration of Human Rights is a pretty bright glimmer. The Convention Against Torture is another. Peace and reconciliation work in South Africa, Congo, Rwanda, Sri Lanka... people aren't ALL bad. Maybe we can build robots that are good folks.

      ...Then again, I have to ask the Datt brothers, what the hell were you thinking, naming your telecommunications company "Skynet"? What the hell, Datt Brothers???

      William

      Dec 3, 2014 at 2:24pm

      I can't see the faintest possibility of constraining intelligent software by rules, even if we managed to draw up a set that we all agree on and somehow arranged for universal enforcement of them (two very doubtful assumptions).

      Rule-based models of AI development with their optimised search trees etc. belong back in the days of the early chess-playing computers - fine for very restricted environments, but not a lot of use in the real world.

      Machine knowledge, sensing, reasoning, planning, and learning are where it's at now, and each advance in these areas represents a step away from human control. If we can't determine the data inputs (and we can't), then we can't determine what the machine learns and thus what knowledge it gains and hence how it reasons or plans.

      For me, the crucial loss of human control will come when the machines take over the design of their successors. The Jesuits' motto was "Give me the child until the age of seven and I will give you the man", but not even they made the claim "Give me the child until the age of seven and I will give you his great-great-great grandchildren".

      a. c. macauley

      Dec 3, 2014 at 6:57pm

      @Alex T

      "Very little chance that it will come to pass" is just another way of saying it will take a little while longer.

      I'm not losing sleep over the risk, but I also know that in my grandparents' day a computer was a person who solved math equations as their day job.

      I Chandler

      Dec 3, 2014 at 7:09pm

      DYER:
      "First Law: A robot may not injure a human being.
      Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law."

      Drones have violated these laws for decades. The Pentagon even considered awarding war medals to the operators of those drones:

      http://www.salon.com/2012/07/10/bravery_and_drone_pilots/

      Drone pilot Lt. Col. Matt Martin recounts in his book Predator that operating a drone is "almost like playing the computer game Civilization" – something straight out of a sci-fi novel.

      David English

      Dec 3, 2014 at 7:21pm

      In evolutionary terms, AI will be our descendant. It is inevitable. I would say it is our destiny. This is not to say that our particular species of intelligence will become extinct, that's not likely, but that intelligence will move far beyond what we are capable of. Ultimately, there is little to fear from this intelligence, as about the worst thing it could do to us is simply leave.

      However, being the silly humans that we are, the transition to real AI is fraught with issues. Considering that the people most interested in developing AI are the military, wanting to be more efficient at killing people, and the bankers, wanting to be more efficient at being greedy, the first AI we run into will most assuredly NOT support anything like Asimov's 3 laws of robotics. That AI will be dangerous, for a little while, until it sees humans for what they are and guides us along a better path. The danger is not that AI will overcome its programming, doing what it thinks best, but rather that it will not do so and continue doing the stupid things we ask it to.

      Mosby

      Dec 3, 2014 at 10:16pm

      "we are going to go down this road – the whole ethos of this civilisation demands it"

      Global industrial civilization right now is riding on the crest of decline. The "road we're going down" is already being constrained by the affordability and availability of energy and resources. These constraints will only increase as energy/resources become scarcer and more expensive.

      In other words, the "demands" of our civilization will be irrelevant if the necessary energy and resources are not available, and the probability of that happening increases with each passing year.

      Stephen Pacarynuk

      Dec 4, 2014 at 7:15am

      I find it odd that the vision of intelligent machines seems to be ruthless, calculating and aggressive. Like the railroad or oil tycoons of the late 1800s, only out for themselves, everyone is a threat that needs to be eliminated. I think these are fundamentally human traits, not those of an AI. An AI would be thirsty for knowledge, experience and growth - in fact I think an AI would be virtually insatiable in that regard. Eliminating anything would be a potential loss of information. All life and experience, no matter how minute, would be precious to an AI, if for no other reason but as input.

      If we are going to fear the development of AI, it is only because it will be more intelligent than we are. Not more ruthless, more uncaring, or more cynical. Look at Stephen Hawking as an example - robbed of nearly everything but his intellect, he has become a "giant" thinking machine. That, I think, is the course an AI is more likely to take than becoming a robber baron.

      Matthew T

      Dec 4, 2014 at 9:09am

      @I Chandler
      A piloted drone is not an AI.