In an open letter released in November 2017, 116 of the world’s leading robotics and AI specialists from 26 countries called on the United Nations to take action against the weaponization of artificial intelligence.
The open letter was issued following an announcement by the UN that a newly formed group of governmental experts led by Ambassador Amandeep Singh Gill of India was set to hold its first meeting to discuss the issue of lethal autonomous weapon systems. The letter was published on the day the meeting was scheduled to take place but only after the meeting had been postponed until later in the month. The 116 signatories of the letter warned the UN that a new “arms race” threatened to usher in a “third revolution in warfare” (after gunpowder and nuclear arms). “Once developed,” the letter said, “lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
An Open Letter to the United Nations Convention on Certain Conventional Weapons
As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm.
We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.
We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.
We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.
Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.
We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.
Lethal autonomous weapons systems, or “fully autonomous weapons,” have been characterized as weapons systems that, once activated, can select and fire on targets without “meaningful human control.” The concern is that low-cost sensors and rapid advances in artificial intelligence are making it increasingly possible to design weapons systems that can target and attack without further human intervention. If this trend towards autonomy continues, the fear is that the role of humans in the decision-making loop will start to decrease, first retaining only a limited oversight role, and then no role at all. The United States and some other governments have said that lethal autonomous weapon systems “do not exist” and that the category does not encompass remotely piloted drones, precision-guided munitions, or defensive systems. The reason they do not exist, it is claimed, is that most existing weapons systems are overseen in real time by a human operator and tend to be highly constrained in the tasks they are used for, the types of targets they attack, and the circumstances in which they are used.
While the idea of “killer robots” conjures images from Hollywood movies such as the Terminator’s T-800 or Robocop’s ED-209, it is a fact that lethal autonomous weapons systems are already in development. Samsung’s SGR-A1 fixed-place sentry gun, developed on behalf of the South Korean government for use along the South Korean border of the 2.5-mile-wide demilitarized zone with North Korea, was the first of its kind with an autonomous system capable of performing surveillance, voice recognition, tracking, and firing with a mounted machine gun or grenade launcher. The United Kingdom’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and incorporating full autonomy. The unmanned combat aerial vehicle, about the size of a BAE Hawk (the plane used by the Red Arrows), had its first test flight in 2013 and is expected to be operational after 2030 as part of the British Royal Air Force’s “Future Offensive Air System,” replacing the human-piloted Tornado GR4 warplanes. Russia, the United States, and other countries are currently developing robotic tanks that can either be remote controlled or operate autonomously. The US’s autonomous warship, the “Sea Hunter” built by Vigor Industrial, was launched in 2016 and, while still in development, is intended to have offensive capabilities including anti-submarine ordnance. Under the water, Boeing’s autonomous submarine systems built on the Echo Voyager platform are also being considered for long-range deep-sea military use.
Tesla and SpaceX CEO Elon Musk, one of the signatories of the open letter to the UN, has been warning for some years about the dangers of artificial intelligence. In a 2014 interview at the MIT AeroAstro Centennial Symposium, when asked about his thoughts on AI, Musk said: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So, we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Musk went on to liken unregulated development of artificial intelligence to “summoning the demon”: “You know those stories where there’s the guy with the pentagram and the holy water, and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out!” A year later, Musk joined 1,000 other experts, including scientist Stephen Hawking and Apple co-founder Steve Wozniak, in signing a letter warning of the dangers of autonomous weapons.
In 2017, speaking in Providence, Rhode Island at the US National Governors Association summer meeting, Musk stressed the need for preemptive regulation of AI. “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” Musk said. “It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization.” Musk’s comments spurred Facebook CEO Mark Zuckerberg, who did not sign the letter to the UN, to accuse Musk of drumming up what he called “doomsday scenarios.” “I think people who are naysayers and try to drum up these doomsday scenarios – I just, I don’t understand it,” Zuckerberg said. “It’s really negative and, in some ways, I actually think it is pretty irresponsible. In the next five to ten years, AI is going to deliver so many improvements in the quality of our lives.” Musk responded by calling the Facebook co-founder’s understanding of the subject “limited.”
The experts debate
The 1980 Convention on Conventional Weapons (CCW) is an international treaty that prohibits or restricts certain weapons considered to cause unnecessary or unjustifiable suffering. Its 1995 protocol banning “blinding lasers” is an example of a weapon being preemptively banned before it was acquired or used. A total of 125 nations are “high contracting” or state parties to the CCW, including all five permanent members of the UN Security Council. Between 2014 and 2016, the CCW held three informal meetings of experts, each approximately one week long, to discuss lethal autonomous weapons systems. At a meeting in April 2016, participating states agreed for the first time on recommendations for future action, proposing the establishment of a group of governmental experts to move deliberations to the next level.
Representatives from more than 70 states attended the first meeting of the group of governmental experts in Geneva on 13-17 November 2017. The meeting focused on three main areas: dimensions of emerging technologies; military effects; and legal and ethical dimensions. Participating states had the opportunity to express their views on calls for bans and other ways to address concerns over lethal autonomous weapons systems.
A second meeting was held in Geneva from 9 to 13 April 2018 and focused on four principal areas. The first agenda item was a discussion about ways of “reaching a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW.” Under the second agenda item, delegations considered “the human element in the use of lethal force and aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems.” The third agenda item brought together five independent experts to answer questions about the potential military applications of artificial intelligence and related technologies; touch points in the human-machine interface; and whether technology itself placed certain limits on human control, supervision, and judgment. The fourth and final session addressed “humanitarian and international security challenges posed by emerging technologies in the area of lethal autonomous weapons systems.” Among the security-related challenges under discussion were the possibility of a new arms race, the danger of proliferation to non-state actors, a lowering of the threshold for the use of force, and cyber-attacks or hacking of network-centric weapon systems. Humanitarian challenges included compliance with international humanitarian law, possible gaps in legal and political responsibility and accountability, and ethical and moral questions raised by increased machine autonomy in the use of lethal force.
Formed by non-governmental organizations at a meeting in New York in October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. According to the campaign website, there is now a global movement to “retain meaningful human control over the use of force.” The growing list of countries calling for a ban on fully autonomous weapons now stands at 26. In April 2018, an African group of countries stated that fully autonomous weapons not under human control should be banned, while China, for the first time, expressed its desire to negotiate and conclude a new CCW protocol to prohibit the use (but not the development) of fully autonomous lethal weapons systems. Five countries – France, Israel, Russia, the United Kingdom, and the United States – have explicitly rejected the idea of negotiating a new international law on fully autonomous weapons. That said, most if not all countries agree on the need to retain human control over weapons systems and the use of force. The principle of human control, be it “meaningful” or “appropriate” or “necessary” or “significant” or some other formulation, finally appears to be firmly at the heart of the debate about lethal autonomous weapons systems.