May 2018 Online Exclusive Article

A Four-Phase Approach to Developing Ethically Permissible Autonomous Weapons

Maj. Zachary L. Morris, U.S. Army

Article published on: 29 May 2018

Pvt. 1st Class Brandon Norton, an M1 Abrams crewmember with 1st Battalion, 63rd Armor Regiment, 2nd Armored Brigade Combat Team, 1st Infantry Division, launches a Lethal Miniature Aerial Missile System (LMAMS) for aerial support 6 April 2018 during a Robotic Complex Breach Concept assessment and demonstration at Grafenwoehr, Germany.

In July 2015, Elon Musk, Stephen Hawking, Steve Wozniak, and over a thousand other scientists and technology leaders signed an open letter calling for a global ban on autonomous weapons.1 Further, Human Rights Watch and the international Campaign to Stop Killer Robots have called for a ban on autonomous weapons.2 These groups claim autonomous weapons threaten the laws of war and create significant ethical and moral dilemmas.

However, multiple drivers propel the U.S. military toward employing autonomous weapons. First, in the 2001 National Defense Authorization Act, Congress mandated objectives that “by 2010, one-third of the aircraft in the operational deep strike force aircraft fleet are unmanned”; and “by 2015, one-third of the operational ground combat vehicles are unmanned.”3 These unmanned systems require increasingly autonomous capabilities due to potential adversaries’ sophisticated electronic warfare capabilities and the contested environments in which unmanned systems will operate. Second, potential adversaries, such as Russia and China, are investing heavily in autonomous weapons. In 2015, Deputy Secretary of Defense Robert Work said, “we know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield.”4

Third, autonomous weapon technology already exists. South Korea currently defends the Demilitarized Zone using the Super aEgis II, an automated gun turret.5 At present, a human operator must approve targets; however, Jungsuk Park, a senior research engineer, stated, “Our original version had an auto-firing system … but all of our customers asked for safeguards to be implemented. Technologically [autonomy] wasn’t a problem for us.”6 Finally, Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, notes many other potential benefits of autonomous weapons. Some examples include the ability to act conservatively because robots do not need to protect themselves; robotic sensors that are better equipped than humans to distinguish targets on complex battlefields; reduced influence of human limitations such as blinding emotion or fatigue; improved ability to quickly integrate information from numerous sources; and improved ethical monitoring, reporting, and behavior.7 These drivers create substantial pressure on the United States to develop and deploy increasingly sophisticated autonomous vehicles and weapons.

The U.S. military should implement a four-phase development process to field autonomous weapons, enabling ethical and responsible development of capabilities and operational concepts. The phased development of autonomous weapons should clearly demonstrate “due care” for potential civilians on the battlefield and the military’s intent to minimize the loss of innocent life based on just war theory principles and American values. The phases outlined in this article create a responsible “crawl-walk-run” progression of capabilities that emphasizes discrimination of potential civilians on the battlefield and clarifies many potential issues of responsibility during employment. An adequately phased approach should provide the necessary steps to appropriately integrate advancing technology and employment concepts, protect innocent life, and develop understanding and confidence within the military and American society.

The article that follows is in four parts. The first section explains the four phases of autonomous weapons development. The second section examines some ethical challenges often attributed to autonomous weapons. The third section discusses the critical issue of discrimination related to autonomous weapons. The final section explains the issues related to responsibility and accountability created by autonomous weapons.

The Four Phases of Development

A four-phase development process for autonomous weapons ensures adequate application of the jus in bello principle of discrimination and clarifies responsibility during employment. The phases define the wide spectrum of autonomy, delineating the authority granted to autonomous weapons based on discrimination and responsibility. Each phase should provide an incremental step facilitating experimentation and testing of improved technology, development of operational concepts and experience, and refinement of operational control and responsibility. These phases, and any subphases required to enable better transitions, serve as a broad outline for the advancement of autonomous weapons within the concepts of just war theory.

The U.S. military is currently in the first phase of autonomous weapons development. This phase maximizes target discrimination and responsibility by limiting autonomous weapons to those capable of targeting weapons, projectiles, or other autonomous systems.8 The current Department of Defense directive states that “human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets,” in defense of a static position or “onboard defense of manned platforms.”9 Department of Defense policy limits autonomous weapons to engaging “materiel targets” only.10 The policy essentially approves already employed weapons systems such as the Aegis Combat System on manned cruisers and destroyers (which is designed to defend against incoming high-speed projectiles and missiles).11 Thus, autonomous weapons remain completely within the control of military personnel, limiting potential violations of discrimination.

Phase two would begin granting autonomous weapons the authority to engage human targets in limited situations that comply with the principle of discrimination and preserve clear responsibility; commanders would have to arm the autonomous weapon and control engagements based on target type, time period, geographic area, and rules of engagement setting (i.e., hold, tight, or free). For example, an infantry company conducting an attack, employing phase two autonomous weapons, could control engagements in several ways. As depicted in the figure, the commander could limit autonomous engagements within a specific timeframe and geographic area of operations, and restrict targeting to enemy military ground vehicles only (i.e., tanks and other armored vehicles). Further, if the unit confirms no civilians are present within designated kill boxes such as the ones in the figure, the commander could authorize autonomous engagement of all human targets within each geographic kill box for a short period of time. The unit could confirm the absence of civilians in a target area using various methods—including drones, reconnaissance assets, or any other means—similar to clearing areas for artillery fire. While no measures are guaranteed to protect all civilians, using autonomous engagement controls in this manner demonstrates “due care” for civilians and significantly reduces the risks of mistaken autonomous engagements.

Figure 1

In this phase, autonomous engagements should emphasize clear commander responsibility for striking only unmistakably identifiable, militarily hostile targets. The primary way to achieve target clarity with current technology is to restrict autonomous systems to targeting military vehicles such as armored vehicles and combat aircraft. Many of these potential systems already rely heavily on technology for targeting information and identification. For example, the Army could employ autonomous weapons for air defense, anti-armor, artillery, and other vehicle- or target-specific (such as grid location) requirements. With recent developments in technology such as radar, thermal and visual shape recognition, and other sensors, autonomous weapons could likely identify and target enemy vehicles well enough to meet or surpass human discrimination requirements now. In this phase, autonomous weapons should remain heavily constrained with regard to attacking individual humans due to current technological limitations and the complexity of distinguishing types of human targets. However, by using a free-fire area within a geographic kill box or sector of fire, commanders could enable engaging human targets within a tightly limited time and area in which only hostile military targets are present. These limitations and constraints for phase two are probably achievable now, or in the near future, and ensure discrimination and clear responsibility for autonomous weapons.
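To make these engagement controls concrete, the sketch below models the kind of authorization check a phase two system might apply before any engagement. It is only an illustration written in Python under stated assumptions: the names (Track, EngagementConstraints, engagement_authorized) and the specific fields are invented here to mirror the controls discussed above (target type, time period, geographic kill box, and rules of engagement setting) and do not describe any fielded system.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Track:
    """A sensed object reported by the weapon's sensors (illustrative only)."""
    target_type: str   # e.g., "armored_vehicle", "person"
    x: float           # easting in a local grid, meters
    y: float           # northing in a local grid, meters
    time: datetime     # time the track was observed


@dataclass
class EngagementConstraints:
    """Commander-set limits for a phase two autonomous weapon (illustrative only)."""
    armed: bool                                   # the commander must arm the system
    roe_setting: str                              # "hold", "tight", or "free"
    allowed_target_types: set[str]                # e.g., {"armored_vehicle"} under "tight"
    window_start: datetime                        # start of the authorized time period
    window_end: datetime                          # end of the authorized time period
    kill_box: tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)


def engagement_authorized(track: Track, c: EngagementConstraints) -> bool:
    """Permit an engagement only if every commander-set constraint is satisfied."""
    if not c.armed or c.roe_setting == "hold":
        return False  # system not armed, or weapons hold is in effect
    if c.roe_setting == "tight" and track.target_type not in c.allowed_target_types:
        return False  # under "tight," only the listed target types may be engaged
    if not (c.window_start <= track.time <= c.window_end):
        return False  # outside the authorized time period
    x_min, y_min, x_max, y_max = c.kill_box
    if not (x_min <= track.x <= x_max and y_min <= track.y <= y_max):
        return False  # outside the geographic kill box
    return True
```

Framed this way, every engagement traces back to parameters the commander explicitly set, which is what preserves the clear line of responsibility phase two requires.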

In phases three and four, autonomous weapons would gain increasing capability and autonomy. Phase three would mirror phase two in many ways; however, autonomous systems would have improved discrimination capabilities allowing greater autonomy over time and space. Better discrimination—based on facial and behavioral recognition, and uniform or other identification—would allow the systems to distinguish between friend and foe, surrendered enemy, wounded enemy, and noncombatants in mixed target environments. That capability, combined with enhanced fire control measures such as changing sectors of fire, direct fire commands, rapidly shifting rules of engagement, and targeting priorities, would allow more detailed and refined engagement areas. These improvements would enable ground commanders to responsibly expand the time and geographic limits in which autonomous weapons operate, including wider parameters to engage individual human targets. Thus, in phase three, autonomous weapons could operate in environments with friendly forces maneuvering in front of and through engagement areas, and in areas with noncombatants mixed with combatants.
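As a purely illustrative extension of the sketch above, the code below adds the kind of discrimination check and commander-adjustable fire control measures this paragraph describes. The class names, labels, and confidence threshold are assumptions chosen for the example, not a claim about how such discrimination would actually be implemented.

```python
from dataclasses import dataclass, field


@dataclass
class Classification:
    """Output of an assumed discrimination model (illustrative only)."""
    label: str         # e.g., "combatant", "noncombatant", "surrendered", "wounded"
    confidence: float  # model confidence between 0.0 and 1.0


@dataclass
class Phase3Controls:
    """Commander-adjustable fire control measures that can change mid-mission."""
    engageable_labels: set = field(default_factory=lambda: {"combatant"})
    min_confidence: float = 0.95  # required discrimination confidence before engaging
    target_priorities: list = field(
        default_factory=lambda: ["armored_vehicle", "combatant"]
    )  # priority order for an assumed fire-distribution routine (not shown)

    def update(self, **changes) -> None:
        """Commander rapidly shifts rules of engagement, priorities, or thresholds."""
        for name, value in changes.items():
            setattr(self, name, value)


def may_engage(result: Classification, controls: Phase3Controls) -> bool:
    """Engage only targets whose label is on the engageable list and whose
    classification confidence meets the commander's threshold; any other label
    (surrendered, wounded, noncombatant) is refused."""
    return (result.label in controls.engageable_labels
            and result.confidence >= controls.min_confidence)
```

For example, as friendly forces begin maneuvering through an engagement area, the commander could call controls.update(min_confidence=0.99), or temporarily set engageable_labels to an empty set, tightening the weapon's behavior without shutting it down.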

In phase four, autonomous weapons would become predominantly independent. They would exercise independent engagement authority and self-defense priorities in almost all situations; however, human commanders would retain a “kill switch” ability to shut off the systems and would provide rules of engagement and other parameters as the mission required. Thus, by phase four, autonomous weapons would form a critical part of human-robotic combat teams and, if developed correctly, would dramatically increase the combat power of U.S. formations while ensuring adequate standards of discrimination and responsibility.
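The “kill switch” relationship can be illustrated with a minimal sketch: the weapon runs its own select-and-engage loop, but a human-retained override is checked before every engagement and halts the system once tripped. The structure and names here are assumptions for illustration only.

```python
import threading
import time


class KillSwitch:
    """Human-retained override; once tripped, it cannot be cleared by the weapon itself."""

    def __init__(self) -> None:
        self._tripped = threading.Event()

    def trip(self) -> None:
        """Called only from the human commander's console."""
        self._tripped.set()

    def is_tripped(self) -> bool:
        return self._tripped.is_set()


def autonomy_loop(kill_switch: KillSwitch, select_target, engage, idle_seconds: float = 0.1) -> None:
    """Phase four style loop: the weapon selects and engages targets on its own,
    but the human-retained kill switch is checked before every engagement."""
    while not kill_switch.is_tripped():
        target = select_target()       # the weapon's own target selection (assumed callable)
        if target is None:
            time.sleep(idle_seconds)   # nothing to engage; idle briefly
            continue
        if kill_switch.is_tripped():   # recheck immediately before engaging
            break
        engage(target)                 # engagement within commander-provided parameters
```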

The military should conduct extensive testing, experimentation, iterative learning, and certification before transitioning between each phase of autonomous weapons development. While these measures are important for phase two, rigorous standards are vital for phases three and four. For the military to successfully field these units as a system, each phase should ensure the simultaneous parallel development of materiel and technological capabilities, operational concepts, training programs, and leader development. Utilizing a panel, or committee, of external security and technology experts could increase the credibility of conclusions, recommendations, and actions that result from each phase of development. Further, each unit should undergo rigorous testing, utilizing a Combat Training Center rotation concept, so that each unit and phase is certified before integration into the military force structure. These measures could substantially improve the performance of autonomous weapons and improve confidence in autonomous weapons development within both the military and American society.

Ethical Challenges for Autonomous Weapons

Emerging military technologies often create unease and prompt exaggerated claims both for and against the new technology.12 Many of the ethical arguments against autonomous weapons apply to all new military technology or stem from human psychology and decision-making. Technology-based arguments against autonomous weapons include fears of malfunctioning or rogue robots, capture or hacking of autonomous weapons, and proliferation of autonomous weapons technology.13 However, these concerns apply to almost all sophisticated military technology. Any advanced networked weapon can malfunction, miss intended targets, get hacked, break, or proliferate. Refusing to develop autonomous weapons does not prevent these risks; instead, it creates other, more dangerous risks, such as unpreparedness when adversaries field autonomous weapons. Even if the United States ceased developing military technology, these risks would remain a challenge. As a result, they are best addressed through development and refinement, not by failing to develop and understand potential technologies.

Concerns about human decision-making and psychology under the influence of advanced technology do not absolve humans of responsibility; errors could occur no matter what advanced technology the United States possesses. Some of these arguments emphasize that autonomous weapons lower the threshold for war, create the possibility of unilateral risk-free warfare, negatively affect military cohesion, place advanced technology in the hands of immoral or irresponsible junior leaders, create difficulties for a postconflict peace settlement, and prevent winning the hearts and minds of an adversary’s population.14 However, all of these problems are human challenges and can arise from any advanced military technology. Immoral people will commit immoral acts regardless of the technology available. Further, strategic decision-makers will still have to make difficult and complex decisions taking into account the effects of available technology. The United States regularly develops weapons technologies that are rarely employed due to strategic calculations, such as nuclear weapons and mines. Possessing autonomous weapons does not require the United States to employ these systems in all conflicts. Developing advanced weapons simply ensures American preparedness and could yield new concepts and capabilities to counter autonomous weapons if adversaries field them. Thus, the United States should not resist the development of autonomous weapons on these grounds.

The primary ethical challenges specifically for employing autonomous weapons are target discrimination and responsibility in combat.15

Discrimination

The principle of discrimination, according to just war theory, includes two parts: “once war has begun, soldiers are subject to attack at any time (unless they are wounded or captured),” and “noncombatants cannot be attacked at any time.”16 Discrimination draws “a line between those who have lost their rights [to life] because of their warlike activities and those who have not.”17 Thus, “anyone or anything engaged in harming” becomes a legitimate target in war and surrenders their right to life.18 Discrimination seeks to protect noncombatants because they have not surrendered their right to life by wartime activity. Soldiers must take due care not to harm civilians and “recognize their rights as best we can within the context of war.”19

The moral doctrine of double effect provides the one exception to discrimination and permits performing an act likely to have evil consequences—killing noncombatants—provided four conditions are met.20 First, the action constitutes a legitimate act of war; second, the direct effect is morally acceptable; third, the actor has good intentions, aims narrowly at the acceptable effects, and reduces the foreseeable evil effect as much as possible, even by accepting costs on himself; and fourth, the good effects are good enough to compensate for the evil effects.21 The third condition, known as double intention, implies that not intending the death of civilians is insufficient; the military must display positive commitment and action to save civilian lives. The risks that the United States must accept to protect civilian lives in war “are fixed roughly at that point where any further risk-taking would almost certainly doom the military venture or make it so costly that it could not be repeated.”22 Thus, the U.S. military must demonstrate the just war theory concept of due care. Due care requires positive intention and action to prevent the errors that may occur when fielding operational autonomous weapons systems, but it does not require preventing every error.

Autonomous systems likely will not, in the near future, be able to discern the moral weight of taking an innocent life or display positive intention under the doctrine of double effect. But autonomous weapons could be programmed to discriminate at least as well as the least morally capable soldier. Phased development is a powerful way to ensure the United States develops this discrimination capability.

The phased experimental development of autonomous weapons proposed here clearly demonstrates due care and clarifies discrimination by initially targeting obviously identifiable military targets before gradually transitioning to more complex target environments as technology improves. Phased implementation of autonomous weapons would facilitate a gradual increase in authority and decision-making, enabling adequate improvement of technology and operational concepts to protect noncombatants. Even under ambiguous laws of war and rules of engagement, phases two and three would allow sufficient human commander control to ensure target discrimination. Further, autonomous-human combat teams could likely discriminate targets better than purely human units. Many aspects of discrimination are merely technical problems, and some autonomous systems (e.g., the Super aEgis II and the Aegis Combat System) can already perform these functions as well as, or better than, many humans within the constraints of phased implementation.

Advancing technology will improve autonomous systems’ discrimination capabilities and potentially outclass humans in most areas other than the most complex moral and legal situations.23 Technology like radar, facial recognition, body-language understanding, voice recognition, thermal imaging, and vehicle or shape recognition points toward improved autonomous systems more than capable of discrimination in a host of combat situations. The proper goal for the military should be to develop autonomous weapons that display due care for civilians in potential future conflict zones where autonomous combat systems are deployed. The U.S. military does not need to wait for autonomous systems capable of perfect discrimination because errors in warfare are impossible to completely prevent; the systems need only demonstrate due care from the military and meet basic standards of discrimination in specific combat scenarios.

Responsibility and Accountability

Moral responsibility and accountability are complex and difficult concepts that permeate all aspects of just war theory. While moral responsibility for various parts of just war theory and different actions can vary among elected officials, democratic citizens, and soldiers, the focus here is on responsibility while fighting with autonomous weapons. Without autonomous weapons, soldiers are responsible for their conduct and actions, at least within the limited sphere of their activity.24

However, autonomous weapons create significant challenges in assigning responsibility for errors of discrimination within jus in bello. For example, should the designer, manufacturer, procurement officer, national leader, controller, supervisor, or field commander bear responsibility? The more autonomous the weapon system, the more complex the answer becomes.25 Because of the responsibility issue, the United States must decide who, or what, makes the decision for an autonomous weapon to strike.26 The phased implementation of autonomous systems provides some solutions, though imperfect, to the issue of responsibility.

Phased implementation of autonomous weapons, at least until phase four, allows the military to apportion responsibility for errors between human commanders and technical failures. Commanders already assume significant responsibility in the conduct of war: “first, in planning and executing their campaigns, they must take positive steps to limit even unintended civilian deaths” and abide by the doctrine of double effect; and “second, military commanders, in organizing their forces, must take positive steps to enforce the war convention and hold the men under their command to its standard.”27 In phases two and three, ground commanders would assume responsibility for the employment of autonomous weapons unless significant technical errors occurred, absolving them of some responsibility. Because commanders would control the autonomous weapons by arming them and setting rules of engagement, time period, target type, and geographic area restrictions, errors based on these inputs would remain the commander’s responsibility. Technical errors that resulted in unnecessary loss of life would reduce the commander’s responsibility somewhat but not absolve it entirely; such errors would shift some responsibility onto those responsible for the development or maintenance of specific systems.

However, the United States should recognize that perfect weapons are an unachievable goal and that mistakes and errors will occur. As long as autonomous weapons equal or surpass the probable performance of humans, those errors are likely acceptable. Even fully human-controlled systems and weapons cause significant unintended loss of life, such as during air strikes in Iraq and Afghanistan. Further, during the Persian Gulf War, some estimates attribute over 50 percent of American casualties to friendly fire.28 Thus, avoiding the introduction of autonomous weapons in order to prevent risks that already exist appears specious. The military should develop autonomous weapons while mitigating the risks as well as possible through experimentation and phased development and implementation.

Phase four creates more difficulties as autonomous weapons gain near full autonomy. Assigning responsibility in this phase appears significantly more difficult, and a clear solution does not currently exist. However, the military could develop new and innovative solutions as technology and operational concepts improve. Further, as technology improves, our understanding of responsibility will likely evolve. Finally, because phase four likely remains years away from implementation, time exists to develop an adequate solution.

Conclusion

A four-phase development and implementation process provides the U.S. military with an ethical and responsible method to gradually employ autonomous weapons. Each phase demonstrates an incremental step forward based on advancing technology, operational concepts, and experience while emphasizing discrimination and responsibility. These incremental steps adequately demonstrate due care by the U.S. military and ethically support fielding autonomous weapons based on just war theory principles and American values. The 17 March 2017 Mosul bombing incident, which killed over one hundred Iraqi civilians, serves as a reminder that the American people expect the military to respect the laws of war and fight in a manner consistent with American values.29 The four-phase approach argued here recognizes and builds on this expectation. The responsible crawl-walk-run methodology allows for experimental improvement and increased experience while adequately protecting innocent lives and clarifying responsibility. By following this concept, or one like it, the U.S. military can protect innocent lives while developing complex and dangerous military technology required for future conflicts.


Notes

  1. Jerry Kaplan, “Robot Weapons: What’s the Harm?,” New York Times (website), 17 August 2015, accessed 12 April 2018, https://www.nytimes.com/2015/08/17/opinion/robot-weapons-whats-the-harm.html.
  2. Kathleen Lawand, “Fully Autonomous Weapon Systems,” (presentation, Seminar on Fully Autonomous Weapon Systems, Mission permanente de France, 25 November 2013, Geneva), accessed 13 April 2018, https://www.icrc.org/eng/resources/documents/statement/2013/09-03-autonomous-weapons.htm.
  3. National Defense Authorization Act for Fiscal Year 2001, Pub. L. No. 106-398, 114 Stat. 1654 (2000), 40.
  4. Danielle Muoio, “Russia and China are Building Highly Autonomous Killer Robots,” Business Insider, 15 December 2015, accessed 13 April 2018, http://www.businessinsider.com/russia-and-china-are-building-highly-autonomous-killer-robots-2015-12.
  5. Simon Parkin, “Killer Robots: The Soldiers That Never Sleep,” BBC, 16 July 2015, accessed 12 April 2018, http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep.
  6. Ibid.
  7. Ronald C. Arkin, “Ethical Robots in Warfare,” IEEE Technology and Society Magazine 28, no. 1 (Spring 2009): 30–33.
  8. Department of Defense Directive 3000.09, Autonomy in Weapon Systems (Washington, DC: U.S. Government Publishing Office, 21 November 2012), 3.
  9. Ibid.
  10. Ibid.
  11. “United States Navy Fact File: Aegis Weapon System,” U.S. Navy, last updated 26 January 2017, accessed 12 April 2018, http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=200&ct=2.
  12. Brian Orend, The Morality of War, 2nd ed. (New York: Broadview Press, 2013), 137.
  13. Patrick Lin et al., “Robots in War: Issues of Risk and Ethics,” in Ethics and Robotics, ed. Rafael Capurro and Michael Nagenborg (Amsterdam: AKA, 2009), 50; Arkin, “Ethical Robots in Warfare.”
  14. Arkin, “Ethical Robots in Warfare.”
  15. Orend, The Morality of War, 137.
  16. Michael Walzer, Just and Unjust Wars, 4th ed. (New York: Basic Books, 2006), 138, 151.
  17. Ibid., 145.
  18. Orend, The Morality of War, 113.
  19. Walzer, Just and Unjust Wars, 152.
  20. Ibid., 153.
  21. Ibid., 153, 155.
  22. Ibid., 157.
  23. Gideon Lewis-Kraus, “The Great A.I. Awakening,” New York Times (website), 14 December 2016, accessed 12 April 2018, https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html?smprod=nytcore-ipad&smid=nytcore-ipad-share&_r=0.
  24. Walzer, Just and Unjust Wars, 38–40.
  25. Lin et al., “Robots in War: Issues of Risk and Ethics,” 56.
  26. Ibid.
  27. Walzer, Just and Unjust Wars, 317.
  28. The American War Library, “The American Friendly-Fire Notebook,” The American War Library online, accessed 12 April 2018, http://www.americanwarlibrary.com/ff/ff.htm.
  29. Bill Chappell, “Pentagon Blames 105 Civilian Deaths from Mosul Strike on ‘Secondary Explosion,’” NPR, 25 May 2017, accessed 12 April 2018, http://www.npr.org/sections/thetwo-way/2017/05/25/528925544/report-on-u-s-airstrike-that-killed-civilians-in-mosul-to-be-released-thursday.
 

Maj. Zachary L. Morris, U.S. Army, is an infantry officer and student at the Command and General Staff College, Fort Leavenworth, Kansas. He holds a BS from the United States Military Academy and an MA from Georgetown University. His assignments include three deployments to Operation Enduring Freedom with the 101st Airborne Division and 1st Armored Division.