Leading artificial intelligence and robotics experts on Tuesday issued an open letter arguing against the development of autonomous weapons.
Its publication coincides with the International Joint Conference on Artificial Intelligence — IJCAI 2015 — being held July 25 through 31 in Buenos Aires.
Many arguments have been advanced both for and against autonomous weapons, the letter says.
If any major military power pushes ahead with AI weapon development, a global arms race is “virtually inevitable” and autonomous weapons will “become the Kalashnikovs of tomorrow,” it warns.
“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” the letter says.
The letter calls for a ban on “offensive autonomous weapons beyond meaningful human control.”
Who’s Afraid of the Big Bad AI?
Among the AI and robotics researchers signing the letter are Stuart Russell, director of the Center for Intelligent Systems at the University of California at Berkeley; Nils J. Nilsson of Stanford University's computer science department; Barbara Grosz, the Higgins Professor of Natural Science at Harvard University; Tom Mitchell, head of the machine learning department at Carnegie Mellon University; Eric Horvitz, managing director of Microsoft Research; and Martha Pollack, provost at the University of Michigan and a professor of computer science and information.
Other AI and robotics researchers who signed the letter include Demis Hassabis, CEO of Google DeepMind, and Francesca Rossi, IJCAI president, co-chair of the AAAI Committee on the Impact of AI and Ethical Issues, and a professor of computer science at Harvard and Padova universities.
Prominent signatories from other fields include renowned physicist and cosmologist Stephen Hawking and SpaceX and Tesla CEO Elon Musk, both of whom have repeatedly voiced concerns about AI developments; Apple cofounder Steve Wozniak; linguist, philosopher, cognitive scientist and logician Noam Chomsky; George Dvorsky, director of the board of the Institute for Ethics and Emerging Technologies; and Bernhard Petermeier, council manager of the World Economic Forum Global Agenda Council on AI and Robotics 2014-2016.
Why a Killer Robot?
The United Nations has spoken out against autonomous weapons, but the Heritage Foundation, a conservative U.S. organization, has argued that the U.S. should oppose a UN ban.
While some people object to lethal autonomous weapons systems, or LAWS, on ethical grounds, “I’m not a big fan of ethics as a framing of the issues,” said Mark Gubrud, an adjunct assistant professor in the Peace, War, and Defense curriculum at the University of North Carolina at Chapel Hill. “All weapons that humans use to fight and kill each other represent the failure of ethics.” [*Correction – Aug. 3, 2015]
When people want to discuss the ethics of LAWS, they are really “trying to find ways of saying it could be OK to develop and use such weapons,” Gubrud told TechNewsWorld.
“Governments and corporations are generally not constrained by ethics,” he observed. “They just hire ethicists to work out ethical rules that allow them to do what they want to do.”
Gubrud favors a ban on LAWS.
The Case for LAWS
Here's the problem with a ban on LAWS: Several countries, up to 40 by the Heritage Foundation's count, already have robotic weapons.
It isn't too difficult to make those weapons autonomous: simply replace the human interface with a system that takes in the same information and automates the response to it, noted Rob Enderle, principal analyst at the Enderle Group.
“The only good defense to an autonomous weapon is another autonomous weapon, so if the U.S. exits this area, it goes from predator to prey pretty quickly,” he told TechNewsWorld.
Further, a global ban will be even more difficult to enforce than the ban on nuclear weapons, Enderle said, “because the base technology is becoming very prevalent. The same level of technology that makes cars self-driving can relatively easily be applied to weapons.”
*ECT News Network editor’s note – Aug. 3, 2015: Our original published version of this column identified Mark Gubrud as owner of the ICRAC website. ICRAC Chairman Noel Sharkey informed TechNewsWorld that the site is owned by Peter Asaro, who is vice chairman of the organization. He said further that Gubrud was not associated with ICRAC in any way, but that “he used to be a member before being expelled.” In response, Gubrud told TechNewsWorld that he was the owner of the website until recently transferring it to Asaro, and that he no longer claims membership in ICRAC due to an internal conflict. He said further that “ICRAC did not have any legitimately constituted process for expelling a member.”
*Update – Aug. 3, 2015: In fact, Gubrud had told TechNewsWorld on July 30 that he was no longer the owner of the ICRAC website, but we neglected to correct his status at that time.