
The Ethics of Robots in War

By Sgt. Maj. Ian M. Shaughnessey

Sergeants Major Academy

February 2, 2024


A futuristic Army robot stands in a street with burning cars

The world is evolving and expanding exponentially, with corporations, small businesses, and individuals from all walks of life integrating technology, specifically machines and robots, into their lives to make daily activities easier.

Machines can perform operations ranging from the simple to the complex. They themselves range from inexpensive household items, like coffeepots that brew individual cups of coffee when and how you want, to multimillion-dollar robotic systems programmed to defend our nation.

In robotics, innovation faces opposition despite its advantages, especially in military applications. Robots are cheaper to build than human Soldiers are to train and care for. Robot programming allows for specific guiding rules that enable split-second decisions, minimizing errors caused by human limitations and reducing the potential for collateral damage.

If the U.S. Army used robotic Soldiers, it would also significantly increase military capabilities, further establishing our place as a world superpower by keeping us ahead of near-peer adversaries.

Despite societal concerns, both in the U.S. and worldwide, the military needs to evaluate, understand, and fully embrace robotic technology to ensure our nation’s welfare and humankind’s future.

Main Concern

Is using robots as Soldiers humane? Is our society ready to transition from sending our youths’ flesh and blood into battle to sending robots? The answer is more complex than it appears. The humanity of war and the effect of robot integration are the two main concerns.

According to Penn State ethicist Alan Wagner, two main arguments suggest that using robots in war is unethical. The first is that robots will reduce the risks of war so much that future wars will become more frequent; fatalities make war real and put political pressure on governments, acting as a deterrent (Wagner, 2017). The second argument targets the robots themselves: their inability to differentiate between combatants and noncombatants creates the potential for higher civilian casualties (Wagner, 2017).

While human casualties create governmental pressure and indiscriminate deaths are possible, the opposite is also true. Using robots removes human Soldiers from the battlefield, automatically reducing casualties. Robots would also adhere to doctrine without emotional interference (Wagner, 2017).

Ultimately, the main question concerns the decision-making aspect of robotic warfare. Should robots be able to make autonomous decisions about killing human beings? Or should humans continue to make the final decisions? “We will always have two factions, one for and one against the use of robots in wars. However, the use of robots in combat is inevitable” (Joshi, 2022). The true concern should focus on how using robots can influence the U.S. Army.

Impact on the Army

The Army is at a crossroads. Lawmakers, government officials, and society must decide how robots are employed in the Army. Robots can and will play a major role in the future of warfare; it is just a matter of when.

When considering robots and their effect on the Army, three specific areas stand out: the cost of Soldiers versus robots, recruiting shortfalls, and the status of current capabilities in drones and other robotic systems.

Cost

Every day, the Department of Defense (DOD) spends millions of dollars training Soldiers to wage war, yet it remains indecisive about integrating robots as Soldiers. Training one Soldier costs between $50,000 and $100,000, with an annual cost of at least $100,000 to maintain the Soldier’s health, training, and other requirements (like salary and housing). In contrast, robots require about the same initial cost but much less in maintenance and storage.
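To make that arithmetic concrete, here is a minimal sketch comparing cumulative per-unit costs over time. The Soldier figures come from the estimates above; the robot sustainment figure is a hypothetical assumption, since the only claim here is that it is “much less” than a Soldier’s.

    # Hypothetical cost comparison built on the article's estimates.
    # The robot sustainment figure is an illustrative assumption only.

    def cumulative_cost(initial: float, annual: float, years: int) -> float:
        """Total cost of one unit: up-front cost plus annual sustainment."""
        return initial + annual * years

    SOLDIER_TRAINING = 75_000   # midpoint of the $50,000-$100,000 estimate
    SOLDIER_ANNUAL = 100_000    # health, training, salary, housing
    ROBOT_ACQUISITION = 75_000  # "about the same initial cost"
    ROBOT_ANNUAL = 20_000       # assumed: "much less in maintenance and storage"

    for years in (1, 5, 10):
        soldier = cumulative_cost(SOLDIER_TRAINING, SOLDIER_ANNUAL, years)
        robot = cumulative_cost(ROBOT_ACQUISITION, ROBOT_ANNUAL, years)
        print(f"Year {years:>2}: Soldier ${soldier:,.0f} vs. robot ${robot:,.0f}")

Under these assumptions, the two start at the same price, and the robot’s advantage widens with every year of service.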

What about the cost of Soldiers’ lives?

With technological advances in “aerial campaigns and the development of extremely sophisticated weaponry,” American casualties dropped from 418,500 during World War II to fewer than 400 during Operation Desert Storm (Springer, 2019). The cost in U.S. Soldiers’ lives during the Iraq and Afghanistan conflicts was also reduced by technological advances, such as Explosive Ordnance Disposal (EOD) robots and drones.

U.S. Army Staff Sgt. Elise Denning, assigned to the Artificial Intelligence Integration Center, conducts maintenance on an unmanned aerial system in preparation for Project Convergence at Yuma Proving Ground, Arizona.

Ultimately, while money is important, it cannot be the deciding factor when weighing Soldiers against robots. Using robots means fewer Soldier casualties and ultimately costs less.

Recruiting

When looking at cost, we must also look at current recruiting issues. “In May [of 2022], the Army chief of staff, Gen. James McConville, testified before Congress that only 23% of Americans ages 17-24 are qualified to serve without waivers to join, down from 29% in recent years” (Kube & Gains, 2022).

Using robots would lessen the need for human Soldiers, and with it the pressure on the military services to keep drawing from a shrinking population of eligible Americans. Robots would also reduce the costs of recruiting for the U.S. Army, both monetary and in behavioral health.

Suicide rates among Soldiers, especially Army recruiters, are already a major concern, and current statistics indicate more problems to come if the Army continues to prioritize human Soldiers over robots.

Status

While the military services struggle to make their recruiting missions, robots are already in use, including in EOD, Unmanned Aerial Systems (UAS), and missile guidance (such as the Patriot system), to name a few.

U.S. Army Futures Command (AFC) is working on several projects to drastically increase the use of robots, though they are not yet at the level required for effective change. Some of AFC’s projects include the Optionally Manned Fighting Vehicle (OMFV), the Extended Range Cannon Artillery system, the Future Attack Reconnaissance Aircraft, and the new Lower Tier Air and Missile Defense Sensor, the replacement radar for the Patriot air defense system (Strout et al., 2022).

While these systems help, they have not yet changed the Army enough to show that robots will eventually reduce the number of Soldiers it requires. Leadership changes and budget limitations remain obstacles, but we will not be prepared to compete with the rest of the world if we do not increase our efforts.

Root Causes

Leadership ideologies and money issues inhibit the use of robots, but the main reason we are reluctant to increase our efforts is the fear of losing our humanity. What if a robot army becomes a weapon of mass destruction (WMD)? Who takes responsibility when robots make mistakes? These are valid questions that require real answers, but they should not and cannot be the reason for zero progress.

A simple breakdown of the root causes of inaction includes:

  • Fear – Fear of the unknown and of losing the humane side of war: reducing the risk to human troops may escalate new conflicts, negatively impact government negotiations, or even lead robots to accidentally start new wars (Consigny, 2022).
  • WMDs – Some worry that robots and AI could become WMDs themselves, or that cyber hackers could take control of them and turn them against us, either of which could cause mass death and destruction (Pasquale, 2020).
  • Responsibility – Who is responsible when autonomous robots make mistakes? If their mistakes lead to fatalities, how do you hold robots accountable? Depending on policies, the rule of law, and the rules of engagement, it may be difficult to hold anyone accountable, especially if robots make independent decisions (Consigny, 2022).

Solution

There is a solution. Most, if not all, of these concerns stem from the lack of established rules for military robot use, specifically rules for controlling robots and limiting their AI.

To manage this issue, we must form global study groups with NATO and our allies to establish rules governing military robotics. Until we can agree on these rules, we cannot progress. Nations that do not adhere to the rules will advance without our support.

The U.S. has already created AI ethics principles through the Defense Innovation Board, setting an example for international use:

  1. Responsible: Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DOD AI systems.
  2. Equitable: DOD should take deliberate steps to avoid unintended bias in developing and deploying combat or non-combat AI systems that could inadvertently cause harm to persons.
  3. Traceable: DOD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of its system technology, development process, and operational methods, including transparent and auditable methodologies, data sources, and design procedures and documentation.
  4. Reliable: DOD AI systems should have an explicit, well-defined use domain, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle.
  5. Governable: DOD AI systems should be designed and engineered to fulfill their intended function, able to detect and avoid unintended harm or disruption, and capable of human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior (Cole, 2019).
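As a minimal, hypothetical sketch of what the fifth principle could look like in software (it reflects no actual DOD system, only the idea of a human decision gate plus a disengagement switch), consider:

    # Hypothetical human-in-the-loop gate illustrating the "Governable"
    # principle: no engagement without human approval, every decision is
    # logged (supporting "Traceable"), and the system can always be
    # disengaged. An illustration only, not an actual DOD design.

    from dataclasses import dataclass, field

    @dataclass
    class EngagementController:
        disengaged: bool = False
        log: list = field(default_factory=list)  # auditable decision trail

        def request_engagement(self, target_id: str, human_approval: bool) -> bool:
            """Permit an engagement only with explicit human approval."""
            if self.disengaged:
                self.log.append(f"{target_id}: denied (system disengaged)")
                return False
            if not human_approval:
                self.log.append(f"{target_id}: denied (no human approval)")
                return False
            self.log.append(f"{target_id}: approved by human operator")
            return True

        def disengage(self) -> None:
            """Kill switch: human or automated deactivation of the system."""
            self.disengaged = True
            self.log.append("system disengaged")

The structural point is that the lethal branch of the logic is unreachable without a human decision, and deactivation remains available regardless of what the autonomous components do.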

The U.S. leads the way despite leadership changes and funding issues, and it continues to do so ethically and with international support.

When the time comes for the Army to field robots at scale, the U.S. must set the example for their ethical use, showing how recruiting issues become less significant as risk to human Soldiers is reduced. The shift could also mean more white-collar jobs created for the Army.

An infantryman with the 101st Airborne Division prepares to launch an RQ-11 Raven drone during familiarization training at the Joint Multinational Readiness Center near Hohenfels, Germany.

Finally, as stated in the fifth AI ethics principle, robots and AI keep human minds in the decision loop, which makes robots an ethical solution, and a humane one: when mistakes happen, the government and manufacturers can be held responsible.

Ethical Lens

The proposed solution needs to be analyzed through ethical lenses. According to Dr. Jack D. Kem (2006), a U.S. Army Command and General Staff College and Sergeants Major Academy professor, ethical reasoning can be applied using three lenses: rules, outcomes, and virtues.

The rules lens asks whether the solution follows existing rules, or whether those rules need adjustment or enforcement. The outcomes lens asks whether the solution produces the greatest good for the greatest number. The virtues lens looks through the perspective of desirable virtues, like the Army Values, when defining courses of action.

Rules

Applying the proposed solution through principle-based ethics, or the rules lens, yields mixed results. The U.S. has rules managing military robot and AI use and development, but it has been unable to establish them at the global level, as shown during the 2021 Convention on Certain Conventional Weapons (CCW) (Consigny, 2022).

Global-level rules still do not exist, though regular conferences to discuss them do. The U.S. is not helping solidify a set of rules because it is part of a minority that wants to allow robot use while most NATO nations do not.

We need to learn to compromise, develop a set of rules, and set an example for other nations to show that ethical robot employment is possible. In the end, through the rules lens, using robots as Soldiers is ethical if a) humans remain part of the decision process when it comes to lethality and b) the international community agrees on a set of rules. Until then, the rules lens will keep robotics from progressing quickly.

Outcomes

Using robots as Soldiers would greatly reduce human casualties, whether robots conduct precision strikes against enemy combatants or fight other robots.

There are significant concerns in this view, but when considering consequence-based ethics or the outcomes lens, the bottom line is that mass-producing machines for national defense dramatically reduces the risk to American lives. What’s more, creating these machines allows the U.S. government to focus more resources on domestic needs. Again, this lens requires strict adherence to the principles governing robot and AI use, but framing things this way clearly shows the advantages of using robots as Soldiers.

A giant robot walks over a burning forest

Virtues

Through this lens, nothing changes when using robots. We must remain dedicated to the Army Values and ensure robot operators hold themselves to the highest moral standards and align robot programming with those principles.

More robots means fewer human Soldiers, which in turn means we must maintain the highest moral and ethical standards to ensure only the most qualified are in control.

The Army is a values-based organization, and using robots would only reinforce its values. The only concern arising from the virtues-based ethical lens is whether the number of decisions human operators must make becomes too burdensome at the height of battle.

AI may also lead human operators to specific decisions rather than the other way around (Wright, 2020). To counter this, there must be a deliberate human-to-robot (AI) balance to ensure hasty decisions are not made.

Conclusion

The decision to use robots requires significant research, planning, and deliberate execution. Research time is over. The Army needs to plan and execute.

“Ultimately, AI will, and is being used for war, in both active and supporting roles” (Wright, 2020). By examining its impact on the Army, the main concerns, and the root causes, we find that a solution exists, and the U.S. needs to put more emphasis on it.

Rather than limit military robot and AI progression, we must embrace societal concerns, both in the U.S. and worldwide, and use them as motivation to evaluate, understand, and embrace robotics technology to ensure our nation’s welfare and humankind’s future. The time is now for America to set the example for the ethical use of military robots.


References

Cole, S. (2019, November 20). As military robots gain traction, ethical-use guidelines emerge. Military Embedded Systems. https://militaryembedded.com/ai/machine-learning/as-military-robots-gain-traction-ethical-use-guidelines-emerge

Consigny, C. (2022, February 8). Are killer robots better Soldiers?: The legality and ethics of the use of AI at war. Human Rights Pulse. https://www.humanrightspulse.com/mastercontentblog/are-killer-robots-better-soldiers-the-legality-and-ethics-of-the-use-of-ai-at-war

Joshi, N. (2022, July 25). Is it ethical to use robots in war? What are the risks associated with it? Forbes. https://www.forbes.com/sites/naveenjoshi/2022/07/25/is-it-ethical-to-use-robots-in-war-what-are-the-risks-associated-with-it/?sh=190de8d02d33

Kem, J. (2006). Ethical decision making: Using the “Ethical Triangle.” CGSC Foundation. http://www.cgscfoundation.org/wp-content/uploads/2016/04/Kem-UseoftheEthicalTriangle.pdf

Kube, C., & Gains, M. (2022, August 11). The Army has so far recruited only about half of the Soldiers it hoped for fiscal 2022, Army secretary says. NBC News. https://www.nbcnews.com/news/military/army-far-recruited-half-soldiers-hoped-fiscal-2022-rcna42740

Pasquale, F. (2020, October 15). ‘Machines set loose to slaughter’: the dangerous rise of military AI. The Guardian. https://www.theguardian.com/news/2020/oct/15/dangerous-rise-of-military-ai-drone-swarm-autonomous-weapons

Springer, P. (2019, April 23). Military robotics might enable conflict while reducing costs. Foreign Policy Research Institute. https://www.fpri.org/article/2019/04/military-robotics-might-enable-conflict-while-reducing-costs/

Strout, N., Judson, J., & Pomerleau, M. (2022, January 10). The US Army sees a future of robots and AI. But what if budget cuts and leadership changes get in the way? Defense News. https://www.defensenews.com/land/2022/01/10/the-us-army-put-experimentation-and-prototyping-at-the-core-of-its-modernization-initiative-is-it-working/

Wagner, A. (2017, February 24). Ask an Ethicist: Is it ethical to use robots to kill in a war? Penn State Today. https://www.psu.edu/news/impact/story/ask-ethicist-it-ethical-use-robots-kill-war/

Wright, A. (2020, April 12). War machines: Can AI for war be ethical? The Cove. https://cove.army.gov.au/article/war-machines-can-ai-war-be-ethical

 

Sgt. Maj. Ian M. Shaughnessey is currently the Division Surgeon Sergeant Major for the 11th Airborne Division and U.S. Army Alaska.
