In November 2015, the United Nations (UN) hosted a multilateral discussion on autonomous killer robots: unmanned weapon systems that can independently select and engage targets without any human in the loop. Several countries held extensive discussions in Geneva on the ramifications of such self-operating weapon systems and deliberated over a proposal to ban them. Yet despite extensive global debates on nonproliferation, the general public remains relatively unaware of the implications of robotic, unmanned weapons.
The exploitation of force multipliers such as Unmanned Aerial Vehicles (UAVs) and robotics for surveillance, target acquisition, and intelligence collection has received a boost with the declining frequency of conventional wars and the advent of sub-conventional warfare. The UAV market is estimated to reach approximately $114.7 billion by 2023. The military rationale for employing armed UAVs is that they offer better precision and minimize collateral damage. However, the use of armed drones in the Af-Pak region, Yemen, and elsewhere has revealed a darker side. Commenting on American drone operations, P.W. Singer observed that this technology has caused a “short-circuiting of the decision-making process,” setting dangerous precedents. He noted that the drone operations, conducted without detailed debate in the U.S. Congress, were carried out not by the Air Force, as many presumed, but by the CIA. There is a fear that this trend, if left unchecked, will adversely affect the military strategies and policies of democracies in the coming decades.
What is most disquieting is that new automated technology would obviate the need for a real-time human operator. Today, all robotic and unmanned weapons in operation belong to the human-in-the-loop category: they are subject to human input and intervention at various stages of operation. In the future, artificially intelligent, fully automated machines will have the technical capability to select and engage targets without any human involvement. Given the dismal record of collateral damage by human-in-the-loop UAVs, one can easily imagine the dangers posed by fully automated (human-out-of-the-loop) aerial, land, and sea-based weapon platforms.
States are directing their research efforts towards significantly increasing the automation of such systems, albeit without overt declaration. While the United States, the United Kingdom, and Israel are the champions of near-autonomous weapon technology, many more countries are following in their footsteps. The United States plans to rely heavily on highly autonomous UAVs for future military applications, whether land-, air-, or underwater-based. China is rivaling American thinking by developing similar capabilities and has expressed an interest in becoming a major exporter of these technologies. Following suit, India and Pakistan have been amassing various types of UAVs to match their military doctrines and security requirements. These trends will inevitably give a wider range of state and non-state actors access to such technologies.
The idea of progressing towards fully automated robotic weapons has sparked extensive debate on technical, moral, and legal grounds. Despite monumental progress in the field of artificial intelligence, experts such as Noel Sharkey assert that no autonomous robot possesses, or will in the foreseeable future possess, human-like judgment and the ability to distinguish an enemy combatant from innocent civilians. Needless to say, a human operator should be required to approve each target and strike before the robot unleashes its lethality. Assigning accountability for a strike gone wrong is another dilemma, one that will haunt future generations if such killer robots become a reality.
Beyond concerns about the quantitative proliferation of these weapon systems, there is the possibility of non-state actors, terrorist groups, and criminal cartels acquiring the technologies needed to develop such platforms, with or without assistance from a state. Reports of Hezbollah flying Iranian-made UAVs over Israeli airspace give credence to such arguments. Concerted efforts by non-state actors and easy access to technology make such scenarios more probable than not. As discussed at the UN, the best long-term response would be a complete ban on such weapons. In the short term, attention should focus on pressuring producers of these technologies to adopt a degree of accountability and transparency regarding their current activities until the technology is effectively banned.
Effective global governance of such lethal technologies can best be achieved by revisiting landmark arms-control and export-control regimes, such as the Missile Technology Control Regime. Treaties such as the Strategic Arms Reduction Treaty (START), New START, the Intermediate-Range Nuclear Forces Treaty, and the Outer Space Treaty were signed when such technologies did not exist. Notably, producers of new autonomous robotic weapons, even nuclear-capable ones, can exploit the resulting loopholes, leading to unrestricted production and transfer to other countries. It is also important to point out that the size or range of robotic weapons can easily be reduced to escape the limitations prescribed by export-control regimes.
More importantly, it is essential to bring non-participatory countries (those that remain outside the purview of these treaties) into discussions of the potential menace of killer robots, rather than alienating them from the subject. Now is the time to weigh the potential dangers of robotic killers and address the challenge before it assumes menacing proportions and significantly threatens world peace.
Image: Jim Sher, Flickr