Gwynne Dyer: Killer robots should be banned before it’s too late
“Killer robots” is a dreadful name, don’t you think? It reminds you of the killing machines in the Terminator series and the Battle Droids of Star Wars. “Lethal Autonomous Weapons Systems” is a much classier name, and the acronym is even better: LAWS. So the international conference that opened at the United Nations' Geneva office on Monday (April 13) is about LAWS.
Don’t think “drones” here. Drones loiter almost silently, high in the air above your picnic, until the operator back in Las Vegas decides that you are plotting a terrorist attack and orders the drone to kill you and your family. But at least there is an operator, a human being in the decision-making loop.
With LAWS, there isn’t. The machine sorts through its algorithms, and decides on its own whether to kill you or not. So you’ll probably be glad to know that there are no operational machines of that sort—yet. But military researchers in various countries are working hard on them, and they probably will exist in 10 or 20 years.
Unless we ban them. That’s what the conference in Geneva is about. It’s a meeting of diplomats, arms control experts, and ethics and human rights specialists who, if they agree that this is a real threat, will put it on the agenda of next November’s annual meeting of the countries that have signed the Convention on Certain Conventional Weapons (CCW). So it’s early days yet, and there’s still a chance to nip this in the bud.
That’s an awkward name, but not nearly as clumsy as the full name: the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. But it actually has done some good already, and it may do some more.
Protocol I bans “the use of weapons the primary effect of which is to injure by fragments which are not detectable by X-rays in the human body.” Protocol II requires countries that use land mines to make them deactivate automatically after a certain period. Protocol IV, added in 1995, prohibits the use of blinding laser weapons.
The world would be a worse place if they did not exist. They do exist, and by and large they are obeyed. But none of these weapons would make a decisive difference in actual battle, whereas they cause or would cause great human misery, so it was easy to ban them.
The problem with killer robots is that they could make a decisive difference in battle. They don’t get tired, they don’t get paralysed with fear, and if you lose them, so what? It’s just a machine. There’s no person in there. But that’s precisely the problem: there’s no person in there. Do you trust the machine to make decisions about killing people—who’s a soldier and a legitimate target, who’s an innocent civilian—all by itself?
Now, let’s be honest about this. Human soldiers on battlefields don’t always make wise, ethically correct decisions about whom to kill and whom to leave alive either. An example. There’s sniper fire coming from that house over there, and you know that there are civilians trapped in there too. You have to get rid of those snipers or you’ll be stuck here all day. You have two options.
You can send a squad of your own soldiers in to clear the house. They’ll kill the snipers, and most of the civilians will be spared. But you may lose one or two of your own soldiers doing it that way, and these are people you know, for whose lives you are directly responsible. Or you can just call in artillery or an air strike and mash the whole house. If you don’t think that’s a hard choice to make, you don’t know much about human beings.
Whereas the killer robot will just go in there and kill the snipers. No hesitation. And if its software is properly designed, it won’t kill the civilians.
Killer robots are a very bad idea, but let’s not get romantic about this. Wars involve killing people, and whether you’re doing it with live soldiers or Lethal Autonomous Weapons Systems, it’s never going to be morally tidy. The real worry is how much easier it would be for a technologically advanced country to decide on war if it didn’t have to see lots of its own soldiers get killed.
So by all means let’s ban purpose-built killer robots if we can: this is an initiative that deserves our support. But bear in mind that there will almost certainly be autonomous machines eventually, and some of them will certainly be capable of killing. So it is also time to start working on international rules governing their behaviour. Isaac Asimov's Three Laws of Robotics (written in 1942) would be a good point of departure.
One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Two: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Comments
Jon Q. Publik
Apr 15, 2015 at 3:15pm
Too late, Stephen Harper already in office!
WilliamR
Apr 15, 2015 at 4:15pm
I think we may have covered this subject before, but the three laws of robotics really aren't the best place to start when trying to restrain killer robots. To start with, the esteemed Mr Asimov filled two books ('I, Robot' and 'The Rest of the Robots') with short stories about the ways those laws could fail in different circumstances, which I guess was the reason for inventing them in the first place.
If we're going to accept the 'inevitable' existence of the killer robot, we should probably look to the kind of guidelines which attempt to restrain killing on the battlefield today. Real soldiers follow (or are supposed to follow) rules of engagement like "combatants can be killed" and "people carrying weapons are combatants". Although rules of engagement are notoriously fragile due to the fog of battle and the effect of personal fear on clear thinking, they remain our best bet.
Kaj Anne
Apr 15, 2015 at 9:54pm
There is a reasoned debate about why we have not yet contacted an alien civilization. Among the reasons are the vast distances of space and the (judging from our own historical experience) strong possibility that civilizations rise and blink out so quickly that it is almost impossible for two advanced civilizations to exist within a communicable portion of space at the same time.
Among the many reasons why advanced civilizations are thought to be doomed to failure is the invention of autonomous killing machines. Just saying....
MDOG
Apr 15, 2015 at 11:01pm
Unfortunately the debate currently being held in international circles regarding LAWs almost identically mirrors the debate that took place between WWI and WWII with regard to aerial bombing. I am surprised that Gwynne Dyer, a military historian, did not reference the obvious similarities. At the end of the first world war the very first long-range aerial bombing raids took place. The technology was limited but it was clear that when the next war was fought this would be a game changer.
The great worry with aerial bombing was that it would be used to conduct morale bombing to inflict mass casualties on a population. Morale bombing was a tactic that essentially called for rapidly killing mass numbers of civilians, demoralizing the enemy's population until it forced its government to sue for peace. LAWs could be used in the same way....although this isn't really being openly talked about....yet.
In the 1920s many of the leading world powers called for a ban on morale bombing (so-called ethical bombing) and limits on bomber fleet size/production. Germany and the US opposed limits on aerial bomber fleet production as they saw this as a move by the major European powers to stop the growth of their influence. Similarly, no effective regulation could stop any country from building a fleet of passenger carriers that looked an awful lot like long-range bombers and could be essentially converted into bombers at the outbreak of war.
Today the same issues would exist. Although the current world-leading powers would no doubt favour some form of restrictions (as this would help maintain the current world order), the rising powers (China, Turkey, India) and the declining powers (Russia, Japan and certain countries in the EU) would see this as a play to slow their rise or hasten their decline, and as such they would oppose regulation. Furthermore, no ban could readily be enforced, because regardless of regulation it would be too easy to build a fleet of non-combatant robots for non-combat purposes that had all the technology, logic and programming required for a combat role and could be easily converted in times of war.
A note on morale bombing. Short of the atomic bomb it was proven to be ineffective. Yet every nation practiced it extensively during the second world war. Food for thought.
I Chandler
Apr 16, 2015 at 9:19am
Dyer: "Whereas the killer robot will just go in there and kill the snipers. No hesitation. And if its software is properly designed, it won’t kill the civilians...So it is also time to start working on international rules governing their behaviour."
Wow. Brought to You by the Letter K - Kill Inc. Reading Dyer is like watching an old Hollywood training film: https://www.youtube.com/watch?v=upXhM4r7INw
The U.S. military insists its high-tech gadgets can kill “bad guys” with unmatched precision. But these assassination weapons may just be the latest example of putting too much faith in the murderous technology of war:
https://consortiumnews.com/2015/03/28/trusting-high-tech-weapons-of-war/
MDOG:"Short of the atomic bomb it was proven to be ineffective. Yet every nation practiced it extensively during the second world war."
Oliver Stone explained that the bomb was not as effective on Japan as it was sold:
https://www.youtube.com/watch?v=YL0YWiZUF6Y
" Yet every nation practiced it extensively during the second world war."
Some nations have practiced it extensively after WW2 - the madman theory makes the assumption that the opponent fears he will be attacked with extreme force regardless of potentially suicidal consequences. In Vietnam, this would imply that Nixon would be willing to use nuclear weapons to 'win' the war heedless of nuclear retaliation from the USSR or China.
McRetso
Apr 16, 2015 at 9:38am
MDOG
The difference between LAWS and the "strategic" bombing of cities is that LAWS don't really do anything human-operated weapons can't do. Strategic bombing or "morale" bombing was a completely new idea that hadn't been possible before except in very limited circumstances (like naval bombardments).
If you want to kill a whole bunch of civilians, then nuke them. Or drop thousands of incendiary bombs on them. You don't need a robot to do that, just some planes or a missile. The attraction of robots is that they allow "boots on the ground" without the risk of casualties.
I personally happen to believe that LAWS would eventually do a better job than human infantry, and probably save lives in a limited, purely tactical context.
But Dyer is dead right that the possibility of waging wars with no casualties would just be too great a temptation. And that is the real issue.
But both you and Dyer are also right that a ban would be hard to enforce indefinitely.
On the bright side, if every country were to start using LAWS exclusively, then maybe we could settle our disputes by just making the robots fight it out. That's probably way too optimistic, but it would be nice.
Bruce
Apr 16, 2015 at 9:57am
I think the parallel here is child soldiers.
Children in war zones are plentiful, require few resources, and are sneaky and effective. Why not use them? Because you're forming a person around early experiences of being a murderer.
We should think the same way about artificial intelligence. This is the early childhood of AI's. We should no more be developing AI's to be involved in violence than we would train a 5 year old to slit someone's throat. A military AI is a child soldier.
MDOG
Apr 16, 2015 at 11:53am
Bruce
You make an interesting point. And I have to admit I don't know enough about the theories surrounding AI to understand everything about them. My understanding is that: they are programs that can act within set parameters as defined by a programmer; they cannot act outside these parameters; they cannot have consciousness unless it is within the context of parameters described for them by their programmer; and other than in science fiction they cannot be self-aware? Are you suggesting then that future models of AI robots, based on previous soldier-robot designs, could somehow have glitches like behavioral problems stemming from their predecessors' participation in war? I'm not sure I understand that.
Bruce
Apr 16, 2015 at 12:09pm
@MDOG
We don't know what consciousness is, or how internal experience or emotions come about. We're building AI's by building components of a mind, with a lot of sneering doubt now in the field as to whether we'll actually ever make a general-AI. I think there's a chance we could do it by accident, or that traits could bleed back into "civilian" AI and brutalize it in subtle ways. Think of how we wound up with SUV's on the road, and the US gun culture.
MDOG
Apr 16, 2015 at 12:39pm
McRetso
Thanks for replying to my post. I take your point about strategic bombing vs LAWS. I agree. I'm afraid I got sidetracked a little in my post explaining morale bombing. My main purpose in comparing the debate surrounding aerial bombing in the 1920s to the current debate surrounding LAWS was that, as you succinctly put it, "bombing was a completely new idea that hadn't been possible before", just as fully automated combat is today.
That being said, I think there are some parallels between aerial bombing and LAWS that drew my attention to morale bombing. On a macroscopic level it was an almost inevitable outcome of the development of aerial bombing. The degree of disengagement that the bomber crews had from their targets increased the plausible deniability of their actions and made it easier to slide down the morality scale once it became the consensus that this was what was needed to win a war.
The argument you make regarding LAWS saving lives is a very interesting one because it is similar to the argument that the United States made when opposing the prohibition on aerial bombing development. They argued that precision bombing could allow a first-strike military to quickly win a war, thereby causing few casualties; hence countries should not be restricted in their efforts to research and develop aerial bombing techniques/forces, as they were likely to make the next war more humane. Both the Americans and the Germans put their money where their mouth was on this one too. Both developed excellent first-strike air forces with highly sophisticated bomb sights and technologies that were designed for accurate strategic bombing. Neither had an air force that was designed to firebomb a city or level it. But the lesson from WWII is that when great powers go to war against other great powers, first-strike militaries don't work, and it doesn't take long before countries are killing mass numbers of civilians because essentially their plan A didn't work.
While I hear your point about the development of LAWS as a potentially humane military invention, I am skeptical (as it seems you are too) that this technology would decrease the risk to civilians rather than increase it. We have seen that argument before, and my question would be: if plan A fails, what would be the inevitable outcome of using LAWS?