
Developing Readiness to Trust Artificial Intelligence within Warfighting Teams

Chaplain (Maj.) Marlon W. Brown, U.S. Army


Composite graphic by Arin Burgess, Military Review.

We are at the beginning of a rapid integration of artificial intelligence (AI) into military operations. The National Security Strategy of the United States of America lists the rapid progression of AI among several emerging technologies critical to national security.1 The Summary of the 2018 National Defense Strategy of the United States of America echoes this concern and addresses the need to “invest broadly in military application of autonomy, artificial intelligence, and machine learning, including rapid application of commercial breakthroughs, to gain competitive military advantages” as part of modernizing key capabilities to build a more lethal force.2

The Joint Artificial Intelligence Center is charged with carrying out the newly developed Summary of the 2018 Department of Defense Artificial Intelligence Strategy. The strategy includes collaboration between defense assets and academic and commercial partners to develop and implement technology.3 A component of this modernization approach is the Defense Advanced Research Projects Agency, for which the president has requested a $3.556 billion budget for fiscal year 2020. The project named “Artificial Intelligence and Human-Machine Symbiosis” is expected to cost more than $161 million in 2020, a 233 percent increase from the 2018 budget.4

Currently, AI integration is limited and has yet to alter warfighting significantly, especially at the tactical level. Humans are still in full control. Because civilian and military leaders are cautious about entrusting to AI any analysis and decision-making that may directly affect human life, many expect this norm to continue. However, this type of human-technology partnership is likely to change because adversaries will challenge the United States with their own robust use of AI. No matter how many prominent science and technology heavyweights propose banning autonomous weapons or how reasonable arguments against AI development may be, the “AI genie of innovation is out of the bottle: it cannot be stuffed back inside.”5 Adversaries are investing heavily in the technology, and so is the United States.

Since future wars will be characterized by the use of rapidly developing AI systems, the military force must be ready to accept this new technology. Readiness is not simply an issue of developing and fielding the right AI systems. Readiness will include solutions to ethical and moral questions like, “Will soldiers be willing to go to battle alongside robots?”6 When answering this type of question, one must consider the ability of human warfighters to trust artificial systems within the team. By leveraging our current doctrinal concept of trust in cohesive teams and evaluating factors that can lead to an individual decision to trust, soldiers can develop a readiness to trust the AI systems soon to be integrated with warfighting teams.

What Is AI?

Before considering the issue of trust in AI, it is important to understand the varied nature of the technology. AI technology is not static, and rapid developments continue to move the goalposts for understanding the technology and how the issue of trust with AI systems should be treated. A quick internet search turns up numerous terms that differentiate types and examples of AI. A useful means of categorizing AI, and the one used throughout this article, is the distinction between artificial narrow intelligence (ANI) and artificial generalized intelligence (AGI). All current AI systems operate in the realm of ANI, in which the system focuses only on narrow tasks. Apple’s Siri is one of the most well-known AI systems and is capable of only a narrow set of tasks related to Apple products. ANI systems can only do what they have been designed to do.

AGI, on the other hand, is the future of AI, whereby machines possess intention and self-awareness. AGI systems, like humans, will be generalists and will be able to apply learned information to a wide variety of tasks and experiences. Philosophical terms are often applied in discussions about AGI. In addition to intention and self-awareness, terms like sentience (the capacity for feeling) and agency (individual power to act) are commonly encountered descriptors for the kinds of AI we categorize as AGI. To put it simply, AGI will be human-like in terms of higher-level thought and emotion. Fictional characters like the Terminator, Wall-E, and Star Trek: The Next Generation’s Data are all AGI systems. While many fictional AGI systems have humanoid forms, developing ANI and future AGI systems may have robotic components or audiovisual projections, or they may exist in cyberspace without human-like interfaces. Trust in ANI and trust in AGI will have different natures based on the definitions and experiences of trust within the military.7

Doctrinal Trust within Military Teams

Army doctrine recognizes the importance of trust in military teams. Mutual trust is basic to the practice of mission command. “Trust is gained or lost through everyday actions more than grand or occasional gestures. It comes from successful shared experiences and training, usually gained incidental to operations but also deliberately developed by the commander.”8 The Army considers trust among soldiers as “reliance on the character, competence, and commitment of Army professionals to live by and uphold the Army Ethic.”9 The importance of trust in building an effective warfighting team is hard to overstate.

War is a human endeavor, but the integration of AI complicates the historical understandings of the nature of war by threatening to replace at least some flesh and blood of military teams with hardware and software. Even if the nature of war is ultimately unaffected by AI (an unlikely proposition), the character of war is expected to be wholly affected by its full integration. Inventor and author Amir Husain suggests that one of the most significant changes to the character of war due to the growing capabilities of AI is the speed of battle at the tactical level.10 What happens when human minds and decision systems can no longer keep pace with the autonomous machine actions of the enemy? While decisions to go to war and how to conduct an operation may allow time and space for human contemplation and analysis, tactical units may find it existentially necessary to depend upon AI to make and execute lethal decisions on the battlefield. In such a scenario, AI would clearly be a member of a cohesive warfighting team requiring trust. Therefore, a conversation about trust between man and machine is warranted.

A shift toward considering trust with nonhuman actors does not seem alien when we realize that such trust is already present in military operations. Perhaps the best modern example of mutual trust between humans and nonhuman actors is the relationship of working dogs to their handlers. Dogs and handlers form very close relationships, closer than those of most pet owners. What makes the working dog unit unique is the level of trust that handlers build with their dogs. Working dogs are trusted not only to accomplish the routine tasks for which they are trained but also to protect their partners in the face of danger, including existential danger.

Because ANI has neither character nor commitment, the trust a human can have in it is only trust in the competence of the system. ANI is expected to demonstrate competency in a wide variety of responsibilities, like accurately identifying threats to critical assets and determining mitigations. It will also likely be able to target enemy actors on the battlefield accurately. Additionally, it may be able to recognize symptoms of depression among team members and recommend a treatment.

Trust in ANI is closer on a spectrum to the kind of trust warfighters can have in a weapon system or a planning tool than to the trust in one another. Tools, whether made of steel or algorithms, should not be treated as true “members” of a team, even when an emotional attachment develops. The level of attachment to an ANI system does not change the nature of the system. It is clear that Tom Hanks’s character in Cast Away felt an attachment to a volleyball he lovingly named “Wilson.” He may have even felt “trust” in Wilson, confiding in it his intimate thoughts. No matter the level of attachment, Wilson was only a piece of leather and rubber. It was a tool for maintaining the castaway’s sanity. Although ANI may be able to act autonomously, autonomy does not equate to agency. Human warfighters must be careful to distinguish their trust in an ANI system within the team from their trust in the human and future AGI members of the team.

AGI will be different. It will have a form of “personhood” that will enable treatment as a trusted member of military teams. To ascribe to it a form of personhood is in no way an attempt to posit whether a sentient machine is a form of life or whether it deserves legal protections as such. Those ethical questions should receive adequate attention elsewhere. Considering AGI as a form of personhood is to recognize that it may have not only competency like ANI but also character and commitment. It will be able to set and accomplish tasks apart from those directed by the commander or agreed upon by the team. Some tasks will be unrelated to the military mission. AGI will have “personal” goals and act to pursue them. This may be understood as creativity. An important part of AGI’s ability to act creatively and with the character prized by the military will be its ability to act in opposition to its own set goals, especially goals related to self-preservation.

Understanding the Decision to Trust AI

Since trust in, and possible mutual trust with, AI systems as part of a cohesive team is necessary, how can warfighting team members develop individual readiness to trust? Robert F. Hurley developed a model that enables the understanding of trust and how it can be built.11 His Decision to Trust Model (DTM) looks at the issue of trust from both the trustor and trustee perspectives. Although the model is of greatest use for interpersonal relationships between and among humans, it can be applied to more impersonal relationships, such as an individual’s trust in an organization or a system like AI. Ambiguities and inconsistencies inherent in the broad scope of human trust in AI systems make applying the model significantly more complex than applying it to the trust relationship between individual humans. Nevertheless, an attempt will be made here to consider the decision to trust through the DTM.

Hurley splits ten essential elements of trust into two categories. The first category consists of three trustor factors that relate to an individual’s foundational disposition to trust: risk tolerance, psychological adjustment, and relative power. These factors exist for a person regardless of any particular situation or trustee. His or her disposition to trust based on this category would apply to a romantic relationship just as it would to a business relationship.

A person’s risk tolerance strongly influences that individual’s willingness to trust. Generally, when risk is high, trust is limited; however, practitioners of mission command are accustomed to extending trust even in high-risk situations. When commanders trust their subordinates to execute disciplined initiative based on mission orders, they do so in part because they understand how leaders make decisions. Leaders are trained in certain methodologies, like the military decision-making process and rapid decision-making, both of which aid in making decisions and in explaining to outsiders how the leader arrived at a decision. Common language and common processes help warfighters trust one another because they can imagine the steps that were likely taken to arrive at any one decision. This kind of insider knowledge is needed in the human-machine relationship.

Of course, AI presents various risks along a spectrum of severity depending on its application. Possible risks include benign malfunction, system infiltration by adversaries, and rogue action with lethal consequences. Any one high risk, or the aggregate of risks, may not be a barrier to a soldier who has a high risk tolerance. Conversely, even a minor risk could be enough to prevent a soldier with low risk tolerance from deciding to trust AI.

The second individual factor, psychological adjustment, concerns how well adjusted an individual may be. Well-adjusted individuals tend to have a greater comfort level with themselves and the world around them. This leads to a greater capacity to trust and for such trust to come quickly. Though the military consists of individuals along the spectrum of psychological adjustment, the military as an institution promotes and provides educational and experiential opportunities for improved adjustment among its members. Training results in greater self-confidence. Uniformity helps to diminish racial and socioeconomic insecurities, issues that may hamper positive adjustment apart from the organization. Quick acceptance and adoption of new missions, equipment, and team members are valued. All of these things work toward improved individual psychological adjustment that will be helpful for the integration of AI.

While the psychological adjustment of members of the newest generation is as varied as it was for members of previous generations, it is apparent that near-term prospective soldiers have a greater overall comfort with the integration of technology. This is because of the technology creep that has become part of the fabric of human experience in the twenty-first century. Generation Z’s affinity for technology is well documented.12 They were born into a world of technology and have embraced it throughout their development. Because AI will become more ubiquitous in civil applications, future soldiers are more likely to enter the force with the necessary psychological adjustment to trust AI. Their experiences and level of trust with military applications of AI will be predicated on their experiences with it as civilians. It is conceivable that a generation from now, the issue of human warfighter trust in AI will essentially have been resolved by society.

The final individual factor, relative power, helps determine an individual’s disposition to trust based on the individual’s power, or lack thereof, over a trustee. Individuals who carry significant power based on their position in a group are more likely to offer trust to others as they have the ability to punish transgressors of that trust or to modify, and even end, the relationship with trustees. If regulations and policies related to AI codify the universal supremacy of human warfighters over AI systems, then a member of the military will be assured relative power that may enable greater trust in AI. If AI is granted the ability to operate or act in any circumstance that overrides the desires of a human team member, relative power is situational and trust becomes more difficult.

As stated in the introduction, there is general agreement about the subordination of AI to human warfighters and great caution about substituting AI for humans in decisions that have lethal effects. This is a comforting position to hold as the military wades into the future. It is a position that offers individual service members an immediate win for the relative power trust factor. Yet, as AI integration increases, there will be unforeseen consequences that may change the relative power dynamic. For example, if a human override of an AI effort results in fratricide or collateral damage that would not have occurred if the AI effort had been permitted, will there be a reexamination of the power dynamic between humanity and machine? Perhaps the successful use of AI in warfighting teams will earn AI a greater position of relative power than it is granted in the early stages of integration. There could be a time when the capability value of AI exceeds the humanitarian concerns of human warfighters, thereby disrupting the relative power factor for a decision to trust.

Team Kaist’s winning robot, DRC-Hubo

Hurley’s second category in the DTM consists of seven situational factors that can be influenced by the trustee to earn the trust of the trustor: situational security, similarities, interests, benevolent concern, capability, predictability/integrity, and communication. It may be helpful to evaluate these factors flexibly, treating the trustee at times as the AI system alone and at other times as a combination of the AI system, the system developers, and the policy makers influencing implementation. This is because ANI, lacking intention and self-awareness, may be restricted by design from behaving outside the parameters established by the system developers. When considering interests, for example, as a situational factor in the decision to trust, such interests may be mostly a reflection of what the system developers have designed.

Situational security, capability, and predictability are all common expectations of any machine augmentation. Situational security is closely connected to the dispositional trust factor of risk tolerance. Because there is risk in the use of AI in military applications, it is important for AI to present situational security, the opposite of risk. Some risk exists simply because researchers, and therefore users, do not understand how AI processes information and comes to a conclusion. This fascinating reality has gained considerable attention. In partnerships within the science and technology ecosystem, the Defense Advanced Research Projects Agency is investing heavily in Explainable AI (XAI). Such “third-wave” AI technology “aims to create a suite of machine learning techniques that produce explainable models while maintaining a high level of prediction accuracy so human users understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.”13 It is an attempt to bridge the gap between the decisions or recommendations made by an AI system and the ability of the human user to understand why the AI came to such a conclusion. Success in the field of XAI will significantly improve the situational security offered by AI to human trustors.
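
To make the idea of an explainable model more concrete, consider a minimal sketch, assuming a purely hypothetical and deliberately transparent “threat score” computed as a weighted sum of observable features. The feature names, weights, and scenario below are invented for illustration and reflect no fielded system; the point is only that a model built this way can show an operator each feature’s contribution to its recommendation, which is the kind of insight XAI seeks to provide for far more complex models.

```python
# Hypothetical and illustrative only: a transparent "threat score" whose output
# can be decomposed feature by feature, so a human operator can see *why* a
# contact was flagged. Feature names and weights are invented for this sketch.

FEATURES = ["speed_kts", "closing_on_asset", "emitter_match", "in_known_corridor"]
WEIGHTS = {
    "speed_kts": 0.004,         # contribution per knot of observed speed
    "closing_on_asset": 1.5,    # 1 if closing on a protected asset, else 0
    "emitter_match": 2.0,       # 1 if emissions match a threat profile, else 0
    "in_known_corridor": -1.0,  # 1 if on a known commercial route, else 0
}
BIAS = -2.0  # baseline score before any evidence is observed

def threat_score(contact: dict) -> float:
    """Weighted sum of observed features plus the bias term."""
    return BIAS + sum(WEIGHTS[f] * contact[f] for f in FEATURES)

def explain(contact: dict) -> list:
    """Per-feature contributions to the score, largest magnitude first."""
    contributions = [(f, WEIGHTS[f] * contact[f]) for f in FEATURES]
    return sorted(contributions, key=lambda item: abs(item[1]), reverse=True)

if __name__ == "__main__":
    contact = {"speed_kts": 420, "closing_on_asset": 1,
               "emitter_match": 1, "in_known_corridor": 0}
    print(f"threat score: {threat_score(contact):+.2f}")
    for feature, contribution in explain(contact):
        print(f"  {feature:>18}: {contribution:+.2f}")
```

An inherently interpretable model like this sits at the easy end of the problem; the harder task the strategy describes is producing comparably clear explanations from far more capable models whose internal reasoning is not naturally transparent.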

The factors of capability and predictability go hand in hand in the realm of technology and are quite simple to understand in the relationship with AI. It is an issue of system competence. Can AI do what it is advertised to do? Does it, in fact, surpass human capability in areas of information analysis, course of action development, or target identification? Experience with AI will likely lead trustors to recognize that AI can do what it is designed to do, with predictability demonstrated through rare failures or deviations from the norm. Society is generally convinced of the superiority of machines over humans in innumerable tasks. Essentially nobody questions or checks by hand the results of a computation made on a calculator because calculators have been used trillions of times to solve mathematical problems without fail. Systems testing prior to implementation can ensure capability and predictability. Once fielded, if AI consistently operates without error according to its defined functions, it strengthens the trustor’s ability to trust.

The remaining factors—similarities, interests, benevolent concern, and communication—are much more difficult to examine in the relationship between a human warfighter and an AI system. Similarity and shared interests between man and machine are difficult to establish. This may be where attempts to create AI systems with anthropomorphic interfaces greatly benefit the decision to trust. Bonding with AI will likely be easier if it has a similar appearance or similarity in the way it communicates. A 2018 study of human interactions with a robot demonstrates the ability of humans to bond with machines that look and behave like humans.14 In the study, some participants interacted with a robot in a social way, and others interacted with it in a functional way. At the end of some interactions, the robot begged not to be turned off. Participants who heard the plea tended to treat the robot as if it were another person. The study concluded that people are likely to treat a machine that has autonomous attributes more like a human than a machine or system that lacks such attributes. AI systems developed with some anthropomorphic capability are more likely to promote trust.

It is possible that similarity and aligned interests can be achieved through ANI’s design for and application to warfighting tasks, its inherent purpose. If soldiers use an AI system at the tactical level that was created or modified for tactical applications, then the system demonstrates similarity to the warfighters operating in tactical environments. A future AGI system could become aware that it exists, and even desires, to fight and win our nation’s wars. This would be a clear demonstration of similarity and alignment of interests with human warfighters.

Perhaps training environments can be developed that produce bonds between AI and human team members. The Army is accustomed to taking dissimilar people and turning them into uniformed personnel. Similarity and alignment of interests are commonly achieved through initial entry training. Diverse trainees from numerous “tribes” bond through training experiences to become part of a new “tribe.” Though diversity is still present, soldiers hold significant similarities with one another and share interests. Trust is an important by-product of such formative training and experiences. Humans who train alongside AI systems may enjoy the same by-product.

The factor of benevolent concern is the ability of AI to put the needs of humans above its own. It is absolutely necessary that AI demonstrate the understanding that human warfighters are more valuable than any nonhuman parts of a team. Will AI destroy itself if it learns that it has been hacked by an adversary? Will AI sacrifice its existence to preserve human teammates? Even humans often opt to care more about themselves than those around them, and we often accept such selfishness in a dog-eat-dog environment. However, selfless service is a hallmark of military service and should, therefore, be required of AI. Like military working dogs, AI should be able to act courageously in defense of other warfighters and the mission.

Defense Department Chief Information Officer Dana Deasy (center) and Air Force Lt. Gen. John N. T. Shanahan

The Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity, released by the Joint Artificial Intelligence Center, articulates the department’s approach and methodology for accelerating the adoption of AI-enabled capabilities to strengthen our military, increase the effectiveness and efficiency of our operations, and enhance the security of the Nation. To view this publication, visit https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.

Future AGI systems, sentient machines, will likely have the capacity for the kind of courage that humans display. Courage, physical and moral, is an essential value for military members and an enabler to accomplish violent actions in support of strategic, operational, and tactical objectives. Although cohesive teams are built on mutual trust developed primarily from everyday actions, grand gestures like acts of bravery bolster trust and uniquely endear members to one another.15 During combat actions, service members are routinely inspired by the courageous acts of their comrades to accomplish more on the battlefield than would otherwise be possible. Bravery can become the instrument to break a stalemate, overcome impending defeat, and overwhelm an enemy force with violence of action. AGI that can behave in such a way will truly earn full trust from human teammates.

Finally, the communication factor impacts most other situational factors. Good and frequent communication is necessary for building trust. Communication with AI will certainly be situational. As previously covered, AI’s decision-making process is difficult to communicate to humans, a problem XAI seeks to resolve. AI systems will need intuitive interfaces that promote communication between the systems and their users. If there is ever a moment when AI is perceived to avoid communication or withhold information from human warfighters, trust will be harmed, possibly irreparably. Frequent and transparent communication by AI systems with soldiers will help to foster trust development and trust maintenance.

Recommendations

The recent establishment of the Army AI Task Force (A-AI TF) under Army Futures Command was an important step toward the military development and implementation of AI.16 It is unknown what, if any, ethical issues are being studied in depth as part of A-AI TF projects. In cooperation with A-AI TF activities, the Army can accelerate the readiness of human warfighters to trust AI in four ways. First, the force must be better educated on the types of systems in development and their expected applications at the strategic, operational, and tactical levels. The inherent secrecy of AI development in the military context complicates this endeavor, but there should be a means of promoting some of the planned applications of AI. It is not enough to proclaim, “AI is coming.” A-AI TF and other related organizations should pursue ways to communicate their activities to the broad audience of the U.S. Army.

Second, A-AI TF should study the trust factors that enable the individual decision to trust as they pertain to AI systems. It should seek to answer, through psychological assessments, whether the current force possesses the necessary disposition to trust AI as tools or members of warfighting teams. Findings should be published and recommendations made as to how to form trust with AI.

Third, mission command doctrine must include the concept of trust between humans and systems, especially autonomous artificially intelligent systems. Just as doctrine details the human trust necessary to build cohesive teams, it must detail the necessary trust of AI as partners in such teams.

Finally, every soldier should begin to evaluate his or her own readiness to trust the AI systems that will soon change the way we fight our nation’s wars. AI integration will change future warfighting teams in ways similar to the social and operational impacts of the integration of women into combat arms military occupational specialties. Soldiers and leaders had to internalize the impacts of that integration and make individual decisions and adjustments for new policies on combat arms training and operations. For AI integration, soldiers at every level should be provided the time, space, and adequate information to ask themselves if they are ready and able to trust a system to accomplish important tasks in their warfighting team.

Conclusion

Future military operations will be characterized by the pervasive integration of AI with human warfighters. Some may argue that integration will be gradual and that trust in AI will come naturally as an outgrowth of the current and common technology affinity and bias that society already possesses. Even if such an argument proves true, it will be important to understand the mechanics of such trust. It could also be the case that large-scale combat operations will require rapid fielding of AI systems that will disturb human warfighting-team cohesion. In such a case, even a basic awareness of the issue of trust in AI will help the force overcome the new challenges quickly. Using current doctrinal concepts of trust and an understanding of the factors that lead to an individual decision to trust, the force can achieve a basic readiness to trust; and with the help of continued study by technologists, ethicists, behavioral scientists, and other interested professionals who serve the military community, the Army can achieve a high level of readiness to trust AI in cohesive warfighting teams.


Notes

  1. The White House, National Security Strategy of the United States of America (Washington, DC: The White House, December 2017), accessed 5 August 2019, https://www.whitehouse.gov/wp-content/uploads/2017/12/NSS-Final-12-18-2017-0905-2.pdf.
  2. Department of Defense, Summary of the 2018 National Defense Strategy of the United States of America (Washington, DC: U.S. Government Publishing Office [GPO], 2018), accessed 5 August 2019, https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf.
  3. Department of Defense, Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity (Washington, DC: U.S. GPO, 2018), accessed 5 August 2019, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
  4. Defense Advanced Research Projects Agency, Department of Defense Fiscal Year (FY) 2020 Budget Estimates (Washington, DC: Department of Defense, March 2019), accessed 5 August 2019, https://www.darpa.mil/attachments/DARPA_FY20_Presidents_Budget_Request.pdf.
  5. Amir Husain, The Sentient Machine: The Coming Age of Artificial Intelligence (New York: Scribner, 2017), 107.
  6. Andrew Ilachinski, Artificial Intelligence & Autonomy Opportunities and Challenges (Arlington, VA: Center for Naval Analyses, October 2017), 16–17, accessed 5 August 2019, https://www.cna.org/CNA_files/PDF/DIS-2017-U-016388-Final.pdf.
  7. Husain, The Sentient Machine, 9–48.
  8. Army Doctrine Reference Publication (ADRP) 6-0, Mission Command (Washington, DC: U.S. Government Printing Office, May 2012 [obsolete]), para. 2-5.
  9. ADRP 1, The Army Profession (Washington, DC: U.S. GPO, June 2015 [obsolete]), para. 3-3.
  10. Husain, The Sentient Machine, 89.
  11. Robert F. Hurley, The Decision to Trust: How Leaders Create High-Trust Organizations (San Francisco: Jossey-Bass, 2012), ProQuest Ebook Central.
  12. “How Generation Z Is Shaping Digital Technology,” BBC Future, accessed 5 August 2019, http://www.bbc.com/future/sponsored/story/20160309-youth-connection.
  13. A Review and Assessment of the Fiscal Year 2019 Budget Request for Department of Defense Science and Technology Programs Before the Subcommittee on Emerging Threats and Capabilities Armed Services Committee, U.S. House of Representatives, 115th Cong. (14 March 2018) (statement of Steven Walker, Director, Defense Advanced Research Projects Agency), 5–6, accessed 5 August 2019, https://docs.house.gov/meetings/AS/AS26/20180314/107978/HHRG-115-AS26-Wstate-WalkerS-20180314.pdf.
  14. Aike C. Horstmann et al., “Do a Robot’s Social Skills and Its Objection Discourage Interactants from Switching the Robot Off?,” PLOS ONE 13, no. 7 (18 July 2018), accessed 5 August 2019, https://doi.org/10.1371/journal.pone.0201581.
  15. ADRP 6-0, Mission Command, para. 2-5.
  16. Mark T. Esper, Memorandum for Principal Officials of Headquarters, Department of the Army, “Army Directive 2018-18 (Army Artificial Intelligence Task Force in Support of the Department of Defense Joint Artificial Intelligence Center),” 2 October 2018, accessed 5 August 2019, https://armypubs.army.mil/epubs/DR_pubs/DR_a/pdf/web/ARN13011_AD2018_18_Final.pdf.

Chaplain (Maj.) Marlon W. Brown, U.S. Army, is the brigade chaplain for the 2nd Armored Brigade Combat Team, 1st Infantry Division, at Fort Riley, Kansas. He holds a BS from East Central University and an MDiv from Southwestern Baptist Theological Seminary. He has served as a chaplain in operational aviation, sustainment, field artillery, and psychological operations units in addition to previous assignments as an infantry and military intelligence officer.

 


January-February 2020