Are killer robots imminent?
At TREC (the Terrestrial Robotics Engineering & Controls Lab) at Virginia Tech, researchers work on humanoid robots. The lab's most famous robot is THOR, billed as the world's first fire-fighting humanoid robot, which watches you from the spinning laser range-finder atop its head. Roboticists call machines like THOR "rescue robots". THOR is supposed to be able to haul a person from a burning building in two minutes flat, but as I watched, the 5'10", 140-pound robot took two lumbering steps forward, then stumbled and tripped. Were it not for the stabilization strap that hooked it to the wall, THOR might have snapped into dozens of pieces.
Jody Williams, chair of the Nobel Women's Initiative, won the Nobel Peace Prize in 1997 for her work to ban anti-personnel landmines. According to Williams, these disaster-response robots are the next step toward killer robots – devices programmed to autonomously kill or maim people. In 2013, Williams and her Campaign to Stop Killer Robots hired a robot, called David Wreckham, to circle in front of the Frontline Club in London and intone "damn killer robots".
Seeing these highly dependent robots at Virginia Tech, it's hard for me to see campaigns like Williams's as anything more than brouhaha. Yes, billionaire entrepreneur Elon Musk warned that advanced products of artificial intelligence "could live forever… take over the world… And then you'd have an immortal dictator from which we can never escape."
And, yes, Stephen Hawking warned that artificial intelligence could destroy civilization and could be the worst thing that ever happened to humanity. But the robots I saw seem no different from the programmable Lego robots that my adolescent son builds. They're vastly more complex, that's true. But how could these fragile, highly dependent "Tin Men" (think of The Wizard of Oz) kill humans?
In 2015, Brian A. Anderson of Motherboard gained exclusive access to the Defense Advanced Research Projects Agency (DARPA) humanoid robots. Looking at these six-foot-tall robots, he asked David Conner, then a senior research scientist at TORC Robotics: "They could hurt me, right?"
Conner, now an assistant professor of computer science at Christopher Newport University, hesitated for just a moment. Then he told him, "You drove here in a machine that's more dangerous than this."
Isn’t it all in the name?
The term "killer robots" was coined by Human Rights Watch in its 2012 report against lethal autonomous weapons systems – weapons that can make lethal decisions without human involvement. Later, Mary Wareham, coordinator of the Campaign to Stop Killer Robots, told The Atlantic: "We put killer robots in the title of our report to be provocative and get attention."
Wareham admitted, “It’s shameless campaigning and advocacy, but we’re trying to be really focused on what the real-life problems are, and killer robots seemed to be a good way to begin the dialogue.”
Ironically, such PR shenanigans are no different from the naming games militaries play to hype or soften the damage their weapons can do. The United States called its missile that could carry warheads totaling up to 3,000 kilotons the "Peacekeeper". Israel Aerospace Industries makes a missile named after the angel Gabriel. A 2006 Israeli mission to bomb South Lebanon was named Mivtza Sachar Holem, "Operation Just Reward". When the United States invaded Iraq, it called the campaign "Operation Iraqi Freedom". After all, it's semantics that affect opinions. The term "killer robots" sounds far more terrifying than "lethal autonomous weapons". In the imagination, it transforms programmed miniature tanks that open truck doors, defuse mines, and hand soldiers their cellular phones into something that blasts your head off.
That fear took off.
Last summer, Elon Musk and more than a hundred other signatories – including Mustafa Suleyman, co-founder of Google's DeepMind AI lab – wrote an open letter urging the United Nations to ban killer robots.
“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s Box is opened, it will be hard to close.”
"Killer Robots Are Good!"
In Washington, D.C., Christine Fair, a military-affairs expert and associate professor in the Security Studies Program at Georgetown University's Edmund A. Walsh School of Foreign Service, insisted that using lethal autonomous weapons is our only viable option in modern warfare.
"Drones," she said, citing one example, "are the only things that are taking out the terrorists. Our options are to do nothing or to come in with fire and missiles from an F-16 or from AH-1W Super Cobras [helicopters], where you take out cities, villages, wedding parties."
“Why are drones worse,” she asked, “than other weapon systems?”
The Bomb and the “Killer Robot”
Killer robots are no different from the bomb.
In 1945, the internationally minded scientists who had worked around the clock on the atomic bomb wanted to use their discovery in the right way – to bring a definite and immediate end to a debilitating war. Their "Franck Report" argued for a demonstration of the atomic bomb's power "before the eyes of representatives of all the United Nations, on the desert or a barren island" to persuade Japan to capitulate. The scientists petitioned for a mock display rather than the real thing. They had no idea that U.S. leadership had other plans – to use the bomb on Hiroshima and Nagasaki instead. The rest is history.
It is scientists who build the technology, and buyers who use it. It is scientists who program the robots, and governments that deploy them. For good; for bad.
Scientists may have the best of intentions. In the hands of an authoritarian, corrupt, or misinformed government, those intentions can go woefully wrong.
“Rescue robots” can turn into “killer robots”.
Artificial intelligence is not inherently good or bad.
At the end of the day, robots serve their masters.