Artificial Intelligence and Multi-Domain Operations

A Whole-of-Nation Approach Key to Success

 

Dan G. Cox, PhD



Artificial intelligence (AI) will play a key role in multi-domain operations (MDO), but much remains unknown, and scholars and practitioners often hold unreasonable views regarding what AI is capable of and the extent to which it is dangerous to civilians on the battlefield. One such view, covered later, is that AI can operate with a degree of infallibility once implemented. Also, because AI systems are not produced within the U.S. military, there is a hole in U.S. military thinking that often blinds military leaders and prevents them from understanding the whole-of-nation (WoN) aspect of weaponizing AI. This is an especially complicated relationship in liberal democracies. Western scholars and practitioners often fear AI will turn into the evil Skynet architecture seen in the Terminator movies and are reluctant to implement fully autonomous lethal systems. No such compunction exists in China, for example, where policy makers and pundits believe that what is good in humanity can be imbued into AI weapons, making them trustworthy.1

Much has been written about the promise of AI. Pundits have extolled AI's virtues in economics, robotics, space exploration, and warfare. Experts argue the global AI economy will reach almost $4 trillion in 2022, and some speculate it will grow to $150 trillion by 2025. AI could streamline businesses, improve health-care systems, and lead to a robotics revolution.2 The fourth industrial revolution will rely heavily on AI to complement the current robotics revolution, using faster quantum computing and unmanned aerial vehicles for delivery and observation (and in the case of war, lethality) while enhancing increasingly independent robotic systems.3

NASA offers an interesting glimpse into a problem set that it believes can be most effectively dealt with through AI, which helps to illustrate why the U.S. military will need AI in future MDO. NASA faces three main challenges in exploring deeper into space, and these issues can only be addressed through autonomous and semiautonomous AI systems. First, probes will frequently fall out of communication due to planetary obstructions and potential radiation spikes. The probes must be able to discern which data is important to gather during these potentially lengthy periods without direction from humans. Second, because the probes are scheduled to move through unmapped space, they must be able to sense and respond in novel ways to a complex environment that planners on the ground may not foresee. Further, this novel adaptation may have to occur during a period of communication blackout with ground control. Third, the distances NASA plans to traverse span multiple lifetimes of the scientists on the ground, so the probes must be able to adapt autonomously over time.4

The problem set NASA faces in future space travel is akin to the problem set that commanders will face in future large-scale combat operations that demand multi-domain synchronicity. U.S. Army Training and Doctrine Command Pamphlet 525-3-1, The U.S. Army in Multi-Domain Operations 2028, emphasizes convergence of military forces to disrupt enemy anti-access/area denial and other layered defenses, creating a temporary window of opportunity that ground forces can exploit to gain the initiative.5 Such convergence is likely beyond human planners alone and will necessitate some AI support. Units are likely to be cut off from command, and communications will be disrupted in future conflict, yet opportunities on the ground may demand independent action from both human- and AI-driven military platforms.

Despite all the early AI successes, the potential AI holds for civilian and military endeavors, and the positive economic impacts thereof, misconceptions in some military and civilian circles remain. In some ways, U.S. military officials underestimate the power of human-AI teaming; in other ways, military leaders overestimate the power of AI, believing it approaches something akin to "magic" with high levels of infallibility.6 The first misconception interacts in part with the second. A consistent Army misunderstanding of the limitations of a human-involved AI system often leads to overestimations of what AI can do on its own.

Human-AI teaming is often perceived as inferior to what an adversary could do with AI allowed to operate unfettered. In some cases, it is. But in many cases, human-AI teaming is superior, and the recent history of AI development contradicts the assertion that AI alone is always better. Before exploring the "human in the loop" misconception, it is worth addressing some common misconceptions among military strategists about what AI actually is.

Initial confusion comes from a misunderstanding of what AI is in reality versus what some in the media speculate it is becoming. The key is to understand which of the three types of AI is most prevalent currently and most likely to materialize and be used in war in the future. In a general sense, all AI systems fit into one of three categories scholars have identified: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial super intelligence (ASI). ANI is a computer algorithm created and focused on a single problem. AGI is a complex program that can handle multiple domains or problems and, as it is perfected, should mimic human intelligence. ASI would have capacities greater than humans, including a great capacity for self-learning.

Considering these three categories, we are currently in the age of ANI. It is unclear how long this period will last, but because AGI and ASI are more intriguing and sensational concepts, they have received the most speculative coverage recently. Advances in ANI have been misconstrued as advances toward the singularity, an event in which an AI becomes sentient, can learn on its own, and begins to advance far past human intelligence. But ANI is nowhere near such an event. Much of this confusion could be avoided if ANI were viewed as a spectrum rather than a single category. A brief examination of gaming AI illustrates this point.

In 1996 and 1997, IBM's algorithm "Deep Blue" defeated world chess champion Garry Kasparov. This was the first time an AI algorithm defeated a human in a game that was considered to rely on human intuition and that could only be mastered after years of practice.7 Deep Blue did not win all of the games, but the fact that this early algorithm could win at least one game in both years showed the potential of AI. In fact, the 1997 version of Deep Blue visibly shook Kasparov, who remarked that some of its moves seemed to reflect human intelligence.8


Twenty years later, Google's AlphaGo beat one of the top Go players in the world, South Korea's Lee Sedol. This win was considered monumental. Chess, while complicated, has a comparatively limited number of possible moves and board positions, while Go is asserted to have more board positions than there are atoms in the universe.9 Go requires deep strategy, and while it is more complicated than chess, it is still a rules-based game.

After Lee was beaten by AlphaGo, pundits were quick to point out that chess and Go were not similar to games like poker, which require human intuition, bluffing, and play with hidden information.10 These same pundits could not imagine AI winning at poker in the foreseeable future. Yet, in less than a year, a series of algorithms beat top poker players. Shortly after AlphaGo repeated its performance in another tournament against humans in South Korea, the Libratus AI beat several top-ranked poker players in one-on-one Texas Hold 'em games. This was considered a startling feat far beyond even the recent Go victories, as poker was considered a game only humans could master.11 However, it was still argued that AI won only because it was a two-person game. Again, pundits argued that AI would struggle at a poker table with more than one human opponent.12

In 2019, an algorithm dubbed Pluribus beat five human professional players at a six-player table over the course of ten thousand hands using a type of machine learning called "reinforcement learning." Reinforcement learning allows the computer algorithm to learn lessons from past instances, or in this case hands, and update its strategy and play.13 The most interesting aspect of this training is that a human professional poker player was there to point out mistakes and help reinforce successes before the computer played live humans. Here we have our first hint at the power of human-AI teaming.
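To make the idea of reinforcement learning concrete, the sketch below shows a toy version of the loop described above: an agent plays many simulated hands, receives a payoff for each, and nudges its estimate of each action's value toward the observed outcome. This is only an illustrative Q-learning-style sketch under invented assumptions; the states, actions, and payoff table are hypothetical, and this is not the algorithm Pluribus actually used.

```python
import random
from collections import defaultdict

# Toy reinforcement-learning loop: the agent replays simulated "hands," observes
# a payoff, and updates its value estimate for the (state, action) pair it chose.
# States, actions, and payoffs are invented purely for illustration.
ACTIONS = ["fold", "call", "raise"]
PAYOFF = {  # hypothetical expected payoff for each state/action pair
    ("strong_hand", "raise"): 2.0, ("strong_hand", "call"): 1.0, ("strong_hand", "fold"): -0.5,
    ("weak_hand", "raise"): -1.0, ("weak_hand", "call"): -0.5, ("weak_hand", "fold"): 0.0,
}

q_values = defaultdict(float)   # learned value of each (state, action)
learning_rate, exploration = 0.1, 0.2

for hand in range(10_000):
    state = random.choice(["strong_hand", "weak_hand"])
    if random.random() < exploration:
        action = random.choice(ACTIONS)                              # explore
    else:
        action = max(ACTIONS, key=lambda a: q_values[(state, a)])    # exploit
    reward = random.gauss(PAYOFF[(state, action)], 1.0)              # noisy outcome of the hand
    # Nudge the estimate for this state/action toward the observed reward.
    q_values[(state, action)] += learning_rate * (reward - q_values[(state, action)])

# After training, the learned policy prefers raising strong hands and folding weak ones.
policy = {s: max(ACTIONS, key=lambda a: q_values[(s, a)]) for s in ("strong_hand", "weak_hand")}
print(policy)
```

In the human-AI teaming arrangement the article describes, the professional player's role can be thought of as shaping the reward signal and flagging clearly bad actions during training, rather than replacing the learning loop itself.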

These AI successes in increasingly complex human games resulted in wild speculation about the coming singularity.14 However, the singularity would occur at the highest end of AGI. The range at this level might be envisioned as beginning with the first AI that passes the Turing Test, meaning that its intellect is indistinguishable from human cognition. The singularity itself is defined as a self-replicating learning machine that could theoretically engage in an infinite amount of learning on a broad range of subjects and far exceed human cognition.15 In the West, especially in America, the singularity evokes an association with the Terminator movies and with fear.16

As mentioned above, the successes AI had against humans in games previously thought unbeatable by algorithms caused some leaders in the technology and science fields to engage in wild speculation about the coming singularity. There was a recent period when major figures in industry and science warned about the potential dangers of AI. Stephen Hawking was one of the first to react, claiming that AI development could be the worst event in human history and that a singularity could easily be used to oppress humanity.17 Technology entrepreneur Elon Musk followed suit, warning that continued development of AI could result in machines "being our overlords."18 These dire warnings from public figures, coupled with the fear the Terminator movies have engendered in the minds of Americans, caused many senior military leaders to fear AI, particularly lethal autonomous weapon systems (LAWS). This fear was exacerbated, erroneously, by the recent successes AI has had against human opponents in chess, Go, and poker.

The problem is that while the games AI mastered increased in complexity, all three were played with very specific rules. All of these AI successes fit into the lowest category, ANI. One of the methodological issues that AI scholars need to address is that AI categories represent a range rather than a discrete point. ANI could encompass something as simple as a Tomahawk missile programmed to accept GPS signals from space, or something as advanced as the Pluribus program that engaged in an advanced form of machine learning. While this represents progression in AI, it is not as earth-shattering as pushing to the edge of a singularity, as some have postulated.


Alan Baldwin offers another way to differentiate AI that allows for a more nuanced understanding of progression. Baldwin adds four parallel categories that complement ANI, AGI, and ASI: reactive, limited memory, theory of mind, and self-aware. Reactive AI would only respond to outside stimuli, while limited-memory AI could use memories of experiences to learn and improve its responses. Theory-of-mind AI could understand and react to the needs of other intelligent entities, while a fully self-aware AI would have human-like intelligence or greater and be able to pass a Turing Test.19 This framework makes clear that we are, at most, at a level of ANI that uses limited memory to master a narrow task. We are still a long way from the upper end of AGI and the singularity. This misunderstanding and Western cultural bias against AI are only some of the factors causing distrust.

The Problem of Trust

In the summer of 2017, Facebook created two artificial intelligence chatbots in an experimental lab. The purpose of this experiment was to elicit more human-like responses from chatbots and to create chatbots capable of higher-level negotiation with humans. The creators had given the two AI entities, dubbed "Alice" and "Bob," considerable leeway in how they used machine learning through interactions with each other and with humans to improve their skills. By the end of the summer, Facebook researchers were surprised to find that Alice and Bob had created their own language in order to communicate and negotiate with one another more efficiently. Eventually, it became difficult for the researchers to determine what the chatbots were saying.20 The project was abruptly shut down, not because the chatbots failed to achieve their goals but because the humans struggled to understand what the AI was doing. There was a crisis of trust between the AI and humans.

Maj. Bobby Monday notes that trust is one of the key factors preceding the effective use of AI in a U.S. Marine Corps formation like a Marine air-ground task force. He argues that this bridge of trust can only be built through constant schooling, training, and development of and interaction with AI programs and platforms.21 Some of this military-civilian collaboration has occurred, but it is not broad in nature, formally enacted with dedicated specialist officers, or holistic.

The U.S. military partnered with Google to develop an AI algorithm to help sift through targeting data to find viable military targets in the conflict with the Islamic State. This algorithm, developed under Project Maven, used reinforcement machine learning with the help of human analysts who corrected mistakes made in the early learning stages. Despite Maven's improvement at identifying targets, U.S. Air Force Gen. Mike Holmes says he does not trust the system yet.22 Holmes wants Maven to improve significantly before he will trust the targeting data it provides.

This is not an unsound position to take, but the problem is that military commanders are unlikely to reach a point where "significant improvement" actually assuages their concerns. The problem of trusting AI is threefold. First, there must be a degree of explainability regarding what the AI is doing. Second, AI must in some ways be regarded as a potential trusted agent, or it can never really be used in the true mission command sense of a battlefield agent. Third, the U.S. military, along with the United States as a nation, must decide whether AI can be trusted with lethality.


Trust built through explainability can only be achieved if the U.S. military is intimately involved in the development, testing, and implementation of AI. The Defense Advanced Research Projects Agency (DARPA) has several programs that can help to bridge the gap between military practitioners and AI developers. DARPA's Causal Exploration project seeks to use a textual analysis program to provide military planners with real-time causal links between actors in a complex operating environment, which can be used to create more precise military inputs into a system.23 The U.S. Army School of Advanced Military Studies has been working with DARPA's Causal Exploration project for the past three years, integrating it into some of its design and systems thinking exercises as an AI-augmented way to gather information about the operating environment. This allows military planners to build some trust with a developing AI program, and it allows DARPA developers to better craft their AI for use by military planners. This type of synergy should occur on a more regular basis between DARPA and civilian AI developers in an effort to build a bridge of trust between military end users and AI applications.

DARPA also has a program aimed directly at the problem of trust. This program is aptly named Explainable AI. Explainable AI is geared toward helping to create a link between AI programs and end users that allows end users to be comfortable with why AI is doing what it is doing. Instead of simply building an efficient program, as Facebook did, an emphasis is placed on building AI systems that are both efficient and have the capability to explain to humans what they are doing and why they are doing it.24 The issue, however, is that this DARPA project is not currently working closely with any military program.

The U.S. military also needs to be involved with AI development in the civilian sector. Some of this is occurring through Army Futures Command's Army Applications Lab, but more synergy with civilian business is necessary. Unfortunately, some civilian-military initiatives are already failing; Google, for example, ceased working on Project Maven due to ethical concerns.25 This has provided an opportunity for China to exploit, which will be addressed later.


The U.S. Army has developed an intriguing concept of mission command in which subordinate officers can take initiative on the battlefield in the absence of direct orders using the concept of prudent risk.26 In order for this to work, trust between the commander and subordinate must be established. Yet, in fast-moving, complex military environments, it is necessary to allow subordinates the ability to react to the operational environment without waiting for direct orders or confirmation.27 This intuitively makes sense when the relationship is between two human agents, but it often makes people uneasy to think about offering such trust or leeway to an AI agent.

The lack of trust in AI begins to blur into the third consideration, allowing AI autonomous lethality. Americans are so culturally conditioned by the Terminator, The Matrix, and other movies depicting rogue AIs enslaving or killing humanity that even the notion of a self-driving car becomes alarming.28 There is great reticence in both American civilian and military circles regarding LAWS. This reflects the cultural bias in American society against AI autonomy in general and more pointedly against autonomous lethality, leading to an insistence on having humans in or on the loop of AI-enabled military platforms.

Yet even this insistence represents a disconnect from reality, as there are already many AI-automated killing systems. Sydney Freedberg notes that the Aegis cruiser defensive fires system can be set to automatic to track and engage incoming salvos when a human would be overwhelmed, and the U.S. Navy's Phalanx and the U.S. Army's counter-rocket, artillery, and mortar systems offer similar automated AI lethality. Each of these systems is aimed at incoming missiles but could target manned aircraft as well. Further, the U.S. Army is working on an active protection system small enough to fit on a tank.29 Future battlefield environments may necessitate beginning the battle with these AI systems on automatic as near-peer adversaries attempt to develop weapons and AI automation aimed at overwhelming a human target seeker. Anti-access/area denial strategies pursued by China and Russia are already being implemented with some lethal AI autonomy.30
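The logic of "set to automatic when a human would be overwhelmed" can be made concrete with a small sketch. The code below is a hypothetical illustration, not a depiction of Aegis, Phalanx, or any real fire-control logic: a point-defense controller normally queues engagements for operator approval but switches itself to automatic when the number of inbound tracks exceeds what a single operator could plausibly adjudicate. Every class name, field, and threshold here is invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto

class EngagementMode(Enum):
    SEMI_AUTO = auto()   # every engagement requires operator approval
    AUTO = auto()        # system engages qualifying inbound tracks on its own

@dataclass
class Track:
    track_id: int
    inbound: bool
    time_to_impact_s: float

class PointDefenseController:
    def __init__(self, operator_capacity: int = 3):
        # Rough number of simultaneous tracks a human operator can adjudicate (assumed).
        self.operator_capacity = operator_capacity
        self.mode = EngagementMode.SEMI_AUTO

    def update_mode(self, tracks: list[Track]) -> None:
        # Flip to automatic only when the salvo exceeds human capacity.
        inbound = [t for t in tracks if t.inbound]
        self.mode = (EngagementMode.AUTO
                     if len(inbound) > self.operator_capacity
                     else EngagementMode.SEMI_AUTO)

    def authorize_engagement(self, track: Track, operator_approved: bool = False) -> bool:
        if not track.inbound:
            return False
        if self.mode is EngagementMode.AUTO:
            return True               # autonomous engagement when overwhelmed
        return operator_approved      # otherwise a human stays in the loop

# Example: a four-missile salvo exceeds the assumed operator capacity of three,
# so the controller switches itself to automatic.
salvo = [Track(i, True, 12.0 - i) for i in range(4)]
controller = PointDefenseController()
controller.update_mode(salvo)
assert controller.mode is EngagementMode.AUTO
```

In these terms, a commander's "going-in position" is simply the default mode plus the threshold at which the controller is permitted to change it.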

Maj. Jerome Hilliard's examination of AI-enabled, autonomous logistical convoys drives home this point. Hilliard found, through his scenario development, that such convoys would likely need some sort of AI-enabled active defensive measures to ensure they were not easily intercepted or disrupted. If the threat were low-end, comprising perhaps nothing more than an individual with a rocket launcher, the AI defensive system might have to react with lethal force against a human target.31 This is very similar to the Aegis example given above; the only difference is that the future setting Hilliard envisioned was on land.

The solution to this problem might be a new element of operational art: "grip." Trust seems more akin to something that occurs between humans, and yet a certain level of trust is necessary so that humans understand, and are at ease with, what AI is doing. When interacting with AI, however, even to the point of making it a LAWS, grip seems like a more relatable concept. Maj. Michael Pritchard pioneered this concept, arguing that there are four grip styles that could be implemented with future AI in military endeavors.


The four types are differentiated by the amount of role exchange (is the AI an autonomous fighter jet, or does it sit on an X-wing like R2-D2 in Star Wars and advise?) and the level of autonomy given to the AI. The four grip categories are loose-closed, tight-closed, loose-open, and tight-open. The analogy Pritchard operated under was akin to the types of grips used in sword fighting. A loose-closed grip on AI would involve significant role exchange, allowing the AI to design plans or actions but giving it no control over implementation. A tight-closed grip would have low to no role exchange, and little autonomy would be given to the AI; the AI is simply a tactical or informational assistant. A loose-open grip would allow the most autonomy and role exchange; the AI would largely act independently, could be an independent platform like a loitering airframe, and would involve either minor human oversight or none at all. A tight-open grip involves a human-designed action that is given over to the AI to implement independently.32
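Pritchard's framework is essentially a two-axis classification, which a small data structure can make concrete. The sketch below encodes the four grip styles along the two dimensions described above, role exchange and autonomy; the numeric scales and the selection helper are invented here purely for illustration and are not drawn from Pritchard's monograph.

```python
from dataclasses import dataclass
from enum import Enum

class Grip(Enum):
    TIGHT_CLOSED = "tight-closed"   # little role exchange, little autonomy (assistant)
    LOOSE_CLOSED = "loose-closed"   # AI designs plans but has no control of execution
    TIGHT_OPEN = "tight-open"       # human-designed action handed to the AI to execute
    LOOSE_OPEN = "loose-open"       # AI plans and executes largely on its own

@dataclass(frozen=True)
class GripProfile:
    role_exchange: float   # 0.0 = AI only advises, 1.0 = AI fully shares the role
    autonomy: float        # 0.0 = human controls execution, 1.0 = AI executes alone

# Illustrative placement of the four grips on the two axes (values are assumptions).
GRIP_PROFILES = {
    Grip.TIGHT_CLOSED: GripProfile(role_exchange=0.1, autonomy=0.1),
    Grip.LOOSE_CLOSED: GripProfile(role_exchange=0.8, autonomy=0.1),
    Grip.TIGHT_OPEN:   GripProfile(role_exchange=0.2, autonomy=0.8),
    Grip.LOOSE_OPEN:   GripProfile(role_exchange=0.9, autonomy=0.9),
}

def select_grip(role_exchange: float, autonomy: float) -> Grip:
    """Pick the grip style whose profile is closest to the commander's intent."""
    return min(
        GRIP_PROFILES,
        key=lambda g: (GRIP_PROFILES[g].role_exchange - role_exchange) ** 2
                      + (GRIP_PROFILES[g].autonomy - autonomy) ** 2,
    )

# Example: an advisory AI with almost no execution authority maps to tight-closed.
assert select_grip(role_exchange=0.15, autonomy=0.05) is Grip.TIGHT_CLOSED
```

Framed this way, a commander's choice of grip becomes an explicit, inspectable setting rather than an unstated assumption about how much the machine is trusted.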

The operational concept of grip is intriguing as it relates to AI and should be studied as a possible addition, allowing for trust and understanding between a subordinate and commander when that subordinate might be a non-human, AI platform, or program. Another possibility is integrating humans and machines. Trust would still be key as the U.S. military experiments with forming manned-unmanned teams (MUM-T). As Maj. Will Branch observes, "This concept is being employed with US Army's Unmanned Aerial Systems and AH-64 Apache helicopters. Through a process called Manned-Unmanned Teaming (MUM-T), Army aviators are able to employ unmanned systems in environments deemed too hazardous for manned aviation. MUM-T enables the UAS to utilize its strengths, reconnaissance and target acquisition, to maximize the strengths of the pilot, lethality and responsiveness. This concept serves as the basis for artificial intelligence human-machine teaming."33

Maj. Colin Sattler takes this concept further, speculating about what a U.S. Army aviation formation might look like in a future MDO. Sattler argues that current and future unmanned aerial vehicles, regardless of the AI on board, have an inherent flaw: they are tethered to a home station, creating a critical vulnerability. His solution is to create an aviation formation that contains a few full-sized, human-piloted attack helicopters surrounded by smaller semiautonomous helicopters under the control of the human operators in the formation.34 Like the Aegis system, full autonomy could be switched on when necessary, giving operators a range of grips to utilize depending on the operational or tactical situation.


The Problem of Chinese Synergy of Economics, Government, and the Military

China has some advantages in AI development that it can exploit over the short term to create a window of asymmetric advantage during large-scale multi-domain operations against the United States. The Chinese advantage is exacerbated by the lack of coordination and understanding between the U.S. Army and Silicon Valley AI technology developers. The People's Liberation Army openly states that it is seeking to develop an advantage in the weaponization of AI in the next decade through a fusion of efforts between the civilian and military sectors.35 China has a significant synergistic advantage in the military implementation of AI; this advantage, and ways to counter it, must be understood.

The United States is still the world leader in AI development, and the U.S. Department of Defense laid out $4 billion for AI development in 2020.36 However, Maj. Ian Morris observes that China is attempting to become the global leader in AI development by 2030.37 In one ancillary area of AI development, 5G, or fifth-generation wireless data systems, China is leading the world.38 This is an important development as it shows that the Chinese system can produce advanced technology and surprise the West, which had enjoyed decades of dominance in the field of wireless communication.

What many in the West still do not understand is that the Chinese government and economy are intertwined. Observers correctly note inefficiencies in a state-controlled economy, like the "zombie cities" that were overdeveloped and now lie devoid of inhabitants.39 Still, China now boasts the second-largest economy in the world and has produced impressive technological and manufacturing companies. China has become a sought-after market and trading partner and has even developed its own international bank, the Asian Infrastructure Investment Bank, to rival or replace the West's World Bank and International Monetary Fund.40

While it seems like capitalism has taken hold in China and some companies are acting in a semicapitalist manner, the Chinese Communist Party (CCP) still retains the ability to control any business within the country. Most importantly, the CCP has ready access to any data generated by a Chinese corporation. While this was most recently publicized by the potential for the CCP to spy on the Western world through Chinese-produced drones operated by the U.S. Department of the Interior, the government's ability to use any data gathered from its immense population is actually a greater advantage for AI development.41

China has access to the health records and phone records of every citizen. In fact, there is very little data that the CCP cannot access. This has turned into a sort of Orwellian construct in the form of what the Chinese government is calling a "social credit score." The social credit score is generated from many datasets, but online posts, activity, and shopping are some of the key sources. A good social credit score can get a person faster internet and permission to travel abroad. A poor score can restrict a person's movement, even within China, and prevent that person from applying for a host of employment opportunities.42 The antidemocratic and free speech implications are evident, but what most people are missing are the benefits such a system has for improving AI.


China is working in concert with large and small corporations to collect this data, and there are no domestic protests against invasion of privacy or in favor of company rights. This gives the CCP access to a gigantic and robust dataset on its 1.4 billion citizens. In the West, there are privacy protections. Data about one's health is protected, and online shopping data, while not completely protected, is more difficult for the government to obtain. Google and Facebook can create large datasets by tracking user behavior, but this has caused a fair amount of consternation in the West. These datasets are not as large or comprehensive as the ones the CCP can access, and large datasets are what allow AI to improve. This problem is not easily overcome.

One could argue that the United States is still the leader in AI technology and that China is at an innovation disadvantage because its capitalist incubator is not truly free. However, the overarching freedoms present in robust democracies can cause collaboration issues between the military and civilian companies and create yet another advantage for the CCP. Three thousand engineers signed a petition against Google participating in a war-making capability in the form of Project Maven, and Google eventually stopped participating.43 China, meanwhile, was able to use its governmental and economic synergy to gain access to Google's advanced AI.

China's gigantic internal market and growing middle class are also enticing to any large corporation, and Google is not immune. China can use its domestic market not only to entice but also to coerce companies into giving up AI algorithms. The Chinese government may even be requiring companies to give up pieces of algorithms before they are considered for entry into the Chinese market.44

Google is currently working on developing AI with a Chinese university and some other Chinese businesses, claiming that science knows no international boundaries. U.S. military officials worry this collaboration is creating a competitive advantage in the weaponization of AI for China.45 Google may not understand the control and collusion among the CCP, the People's Liberation Army, businesses, and universities. The CCP controls it all and can easily take any joint AI venture and weaponize it. The Chinese government does not feel constrained by international law, and there is no domestic public outcry against the weaponization of AI.


The U.S. military missed a huge opportunity to get ahead of the negative sentiment at Google by engaging in an active outreach program with Silicon Valley to allay some of the fears civilian technology workers have. A WoN approach would mandate that some of the military's human resources be dedicated to engaging the rest of the Nation. The four services devote people to recruiting, public affairs, community outreach, and other programs designed to interact with civilians and civilian institutions. It is time to dedicate military personnel to deep interaction with WoN resources like Silicon Valley, industrial developers, and major suppliers and distributors like Amazon. This would include encouraging select officers and enlisted personnel to shift from military to civilian careers in these fields while maintaining ties with the military. The U.S. military emphasizes talent retention, but it should consider encouraging talent expansion as well. One obstacle that can be overcome involves follow-up with personnel transitioning to the civilian workforce. Currently, the U.S. military does not formally track and maintain ties with personnel who enter the civilian workplace, even if they have retired after a successful twenty-year career.

There is one bright spot in the form of a new military program aimed at direct collaboration with civilian businesses. In 2019, the U.S. Army began experimenting with one aspect of a WoN strategy at a division of Army Futures Command (AFC): the Army Applications Lab (AAL). The goal of the AAL is to integrate "geeks in hoodies, defense contractors in suits, and soldiers."46 The AAL helps to incubate small startups by linking them with civilian defense contractors and defense innovators within the U.S. Department of Defense. The organization also hosts competitions like the "How-to-Kill-Drones Hackathon" in an attempt to further entice and integrate civilian startups, programmers, and entrepreneurs.47

The author had the opportunity to visit AFC and the AAL a year after AFC announced it would be working with small civilian businesses. A year after its inception, the AAL had advanced considerably. The colonel in charge and all of the Army officers involved with the project wore only civilian clothes to work. The AAL itself looked more like a Silicon Valley company than a U.S. Army construct. Glass-walled offices ringed open areas filled with sectional sofas and large tables for collaboration. Everything in the office was geared toward collaboration in a civilian sense, and large contractors like Booz Allen Hamilton, which small businesses needed to partner with to get through the two-year incubation period, had offices adjoining the AAL. Everything was streamlined for success, and the attitude the military officers took toward the project was refreshing.

This is a good first step toward mitigating the Chinese advantage in exploiting not only its own AI businesses but also American companies. However, the U.S. Army should consider further steps to integrate and liaise with American technology companies. One of the first steps would include developing career technology liaison officers, similar to liaison officers who serve to bridge gaps between U.S. troops and international/coalition counterparts. A permanent bridge would help to ensure that the U.S. military and businesses producing AI would have a better understanding of one another and the synergies that need to develop in order to successfully defend America.


Conclusion

Americans tend to think of choices in binary, all-or-nothing terms. They also often argue that one choice is better than the other, believing one choice carries less risk or is more efficient. In complex human interactions, it is often difficult to actually gauge this, and cultural predispositions often play a central role. Ironically, iconic movies about rogue AI like The Terminator and The Matrix also play a large role in people's calculations. This leads to an initial distrust of AI (rather than a neutral examination of what AI can offer planners and tacticians) that must be overcome.

Another cognitive challenge revolves around confusion regarding where we actually are in AI development. Many sensational articles on the coming singularity, which feed into Terminator fears, are confusing military leaders. When a leader says AI equates to nuclear deterrence, that leader is referring to the potential for two singularities occurring simultaneously. This is a dangerous misconception, as it can cause military and political leaders to discount AI and its weaponization while our adversaries forge ahead. The singularity is a long way off and may not even be possible. What we have today is at the low end: ANI that is narrow in nature, even if it can learn through repetitive practice. Yet this narrow AI can perform functions on the battlefield that have the possibility of being a force multiplier or even a force replacer.


America's major adversaries are pursuing the weaponization of AI at breakneck speed in an effort to balance, and perhaps surpass, current U.S. asymmetric advantages on the battlefield. Among America's adversaries, China is at the forefront of both AI development and the weaponization of AI. The U.S. military needs to formally engage with American technology businesses and create a democratic synergy. The U.S. government is not the CCP, and it cannot dictate innovation or access to datasets. The military needs to build long-term trust relationships with the developers of AI in order both to allay the fears of Silicon Valley developers and to build the understanding and trust of military operators. Leaders value soldiers, sailors, airmen, and marines, but replacing some of these individuals with AI-enabled platforms will actually make the personnel in the manned platforms safer, more adaptable, and better able to operate in a highly complex and interdependent MDO environment. Embracing a WoN strategy allows the U.S. military to leverage the civilian industries it will need in any future fight.

Finally, America already has AI systems that can be switched to autonomous should the human in the loop become overwhelmed. In a future fight, planners and tacticians should strongly consider that the enemy will be using AI and AI-enabled swarms to overwhelm U.S. military platforms. Therefore, the going-in position on at least defensive LAWS may have to be fully autonomous. In the West, there is an unreasonable standard that AI cannot cause collateral damage.48 Instead, one should consider that war will always produce collateral damage, hardship, and pain. The question is which system is most efficient in linking tactical actions in time, space, and purpose to achieve strategic goals and keep the United States in a position of advantage. Because AI is equated with Skynet from the Terminator movies, it is assumed that it will be evil if unfettered. However, AI could actually perform more consistently and with more morality than humans do. Perhaps it could be programmed with the laws of war and precision in targeting that in turn might limit war crimes and collateral damage. AI is coming to the battlefield. The sooner the U.S. military embraces it and its civilian sources, the better. The cooperative advancement of knowledge, trust, and AI platform development points toward the continued and future success of our military in a wide variety of settings around the world.


Notes

 

  1. John Brock and Dan G. Cox, "Why Robotic War Will Challenge Current Morality in War Thinking," Interagency Journal, Special Report: The Ethics of Future Warfare, 1 May 2018, https://thesimonscenter.org/wp-content/uploads/2018/05/Ethics-Symp-2017.pdf.
  2. Andrew Cave, "Can the AI Economy Really Be Worth $150 Trillion by 2025?," Forbes (website), 24 June 2019, accessed 6 January 2021, https://www.forbes.com/sites/andrewcave/2019/06/24/can-the-ai-economy-really-be-worth-150-trillion-by-2025/#6e5cadca3bf4.
  3. "The Role of AI and Robotics in the Fourth Industrial Revolution," Medium, 12 June 2019, accessed 6 January 2021, https://medium.com/kambria-network/the-role-of-ai-and-robotics-in-the-fourth-industrial-revolution-81a749f66740.
  4. Abby Norman, "NASA: AI Will Lead the Future of Space Exploration," Futurism, 27 June 2017, accessed 6 January 2021, https://futurism.com/nasa-ai-will-lead-the-future-of-space-exploration.
  5. U.S. Army Training and Doctrine Command (TRADOC) Pamphlet 525-3-1, The U.S. Army in Multi-Domain Operations 2028 (Washington, DC: TRADOC, December 2018), iii; see Sam H. Kriegler, Artificial Intelligence Guided Battle Management: Enabling Convergence in Multi-Domain Operations (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2020). This monograph provides an in-depth examination of artificial intelligence (AI) and multi-domain operations.
  6. Kalev Leetaru, "From Infallible Computers to Infallible AI & Data As Truth," Forbes (website), 20 August 2019, https://www.forbes.com/sites/kalevleetaru/2019/08/20/from-infallible-computers-to-infallible-ai-data-as-truth/?sh=50c630563182.
  7. Alan Baldwin, "How Gary Kasparov's Defeat to IBM's Deep Blue Supercomputer Incited a New Era of Artificial Intelligence," The Independent, 12 April 2020, accessed 6 January 2021, https://www.independent.co.uk/sport/general/chess-garry-kasparov-deep-blue-ibm-supercomputer-artificial-intelligence-a9461401.html.
  8. Ibid.
  9. Steven Borowiec, "Google's AlphaGO AI Defeats Human In First Game of Go contest," The Guardian (website), 9 March 2016, accessed 6 January 2021, https://www.theguardian.com/technology/2016/mar/09/google-deepmind-alphago-ai-defeats-human-lee-sedol-first-game-go-contest.
  10. Bernard Marr, "Artificial Intelligence Masters the Game of Poker—What Does That Mean For Humans?," Forbes (website), 13 September 2019, accessed 6 January 2021, https://www.forbes.com/sites/bernardmarr/2019/09/13/artificial-intelligence-masters-the-game-of-poker--what-does-that-mean-for-humans/#70d89cdb5f9e.
  11. Borowiec, "Google's AlphaGO AI Defeats Human In First Game of Go Contest."
  12. Carl Engelking, "Artificial Intelligence Just Mastered Go, But One Game Still Gives AI Trouble," Discover Magazine, 27 January 2016, https://www.discovermagazine.com/technology/artificial-intelligence-just-mastered-go-but-one-game-still-gives-ai-trouble/.
  13. Christiana Reedy, "Kurzweil Claims the AI Singularity Will Happen by 2045," Futurism, 5 October 2017, accessed 6 January 2021, https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045.
  14. Greg Milicic, "The Existential Threat of the Pending Singularity," Medium, 13 February 2019, https://medium.com/@gmilicic.
  15. Ibid.
  16. John W. Brock, Why the United States Must Adopt Lethal Autonomous Weapon Systems (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2017).
  17. Arjun Kharpal, "Stephen Hawking Says A.I. Could Be 'Worst Event in Human History,'" CNBC, 6 November 2017, accessed 6 January 2021, https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.
  18. Maureen Dowd, "Elon Musk's Billion Dollar Crusade to Stop the AI Apocalypse," Vanity Fair (website), April 2017, accessed 6 January 2021, https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x.
  19. Baldwin, "How Gary Kasparov's Defeat to IBM's Deep Blue Supercomputer Incited a New Era of Artificial Intelligence."
  20. Chris Perez, "Creepy Facebook Bots Talked to Each Other in A Secret Language," New York Post (website), 1 August 2017, accessed 6 January 2021, https://nypost.com/2017/08/01/creepy-facebook-bots-talked-to-each-other-in-a-secret-language/.
  21. Robert Monday, Artificial Intelligence: Expected to Win; Ready to Fail (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2018).
  22. Colin Clark, "Air Combat Commander Doesn't Trust Project Maven's Intelligence-Yet," Breaking Defense, 21 August 2019, accessed 6 January 2021, https://breakingdefense.com/2019/08/air-combat-commander-doesnt-trust-project-mavens-artificial-intelligence-yet/.
  23. Joshua Elliott, "Causal Exploration of Complex Operational Environments," Defense Advanced Research Projects Agency (DARPA), 20 April 2017, accessed 6 January 2021, https://www.darpa.mil/program/causal-exploration.
  24. Matt Turek, "Explainable Artificial Intelligence," DARPA, accessed 6 January 2021, https://www.darpa.mil/program/explainable-artificial-intelligence.
  25. Jason Murdock, "What Is Project Maven? Google Urged to Abandon U.S. Military Drone Program," Newsweek (website), 15 May 2018, accessed 6 January 2021, https://www.newsweek.com/project-maven-google-urged-abandon-work-military-drone-program-926800.
  26. Army Doctrine Publication 6-0, Mission Command (Washington, DC: U.S. Government Publishing Office, 2014 [obsolete]).
  27. Dan G. Cox, "Mission Command and Complexity on the Battlefield," chap. 8 in Mission Command in the 21st Century: Empowering to Win in a Complex World, ed. Nathan Finney and Jonathan Klug (Fort Leavenworth, KS: Army University Press, 2016).
  28. Tonya Mohn, "Most Americans Still Afraid to Ride in Self-Driving Cars," Forbes (website), 28 March 2019, accessed 6 January 2021, https://www.forbes.com/sites/tanyamohn/2019/03/28/most-americans-still-afraid-to-ride-in-self-driving-cars/#780c583532da.
  29. Sydney J. Freedberg Jr., "Army Futures Command Wants YOU (To Innovate)," Breaking Defense, 23 October 2018, accessed 6 January 2021, https://breakingdefense.com/2018/10/army-futures-command-wants-you-to-innovate/.
  30. Elsa B. Kania, Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power (Washington, DC: Center for New American Security, November 2017), https://www.cnas.org/publications/reports/battlefield-singularity-artificial-intelligence-military-revolution-and-chinas-future-military-power.
  31. E. Jerome Hilliard, Military Innovation through Lethal Logistical Capabilities (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2018).
  32. Michael D. Pritchard, Artificial Intelligence and Operational Art: The Element of Grip (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2018), 31.
  33. William A. Branch, Artificial Intelligence and Operational-Level Planning: An Emergent Convergence (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2018), 27.
  34. Colin Sattler, Aviation Artificial Intelligence: How Will It Fare in the Multi-Domain Environment? (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2018).
  35. Elsa B. Kania, "China's Quest for an AI Revolution in Warfare: The PLA's Trajectory from Informatized to 'Intelligentized' Warfare," The Strategy Bridge, 8 June 2017, accessed 6 January 2021, https://thestrategybridge.org/the-bridge/2017/6/8/-chinas-quest-for-an-ai-revolution-in-warfare.
  36. Chris Cornille, "Finding Artificial Intelligence Money in the Fiscal 2020 Budget," Bloomberg, 28 March 2019, accessed 6 January 2021, https://about.bgov.com/news/finding-artificial-intelligence-money-fiscal-2020-budget/.
  37. Ian R. Morris, Artificial Intelligence and Human-Agent Teaming: The Future of Large-Scale Combat Operations (monograph, Fort Leavenworth, KS: U.S. Army School of Advanced Military Studies, 2020).
  38. Brian Fung, "How Chinese Huawei Took the Lead Over U.S. Companies in 5G Technology," Washington Post (website), 10 April 2019, accessed 6 January 2021, https://www.washingtonpost.com/technology/2019/04/10/us-spat-with-huawei-explained/.
  39. Robert Farley, "With TPP's Demise, What Happens to U.S. Intellectual Property Rights Abroad?," The Diplomat, 30 November 2016, accessed 6 January 2021, https://thediplomat.com/2016/11/with-tpps-demise-what-happens-to-us-intellectual-property-rights-abroad/.
  40. Paola Subacchi, "The AIIB Is a Threat to Global Economic Governance," Foreign Policy (website), 31 March 2015, accessed 6 January 2021, https://foreignpolicy.com/2015/03/31/the-aiib-is-a-threat-to-global-economic-governance-china/.
  41. Ryan Morgan, "U.S. DOI Abruptly Grounds All China-Made Drones Over Spying Concerns," American Military News, 31 October 2019, accessed 6 January 2021, https://americanmilitarynews.com/2019/10/us-doi-abruptly-grounds-all-china-made-drones-over-spying-concerns/.
  42. Anna Mitchell and Larry Diamond, "China's Surveillance State Should Scare Everyone," The Atlantic (website), 2 February 2018, accessed 6 January 2021, https://www.theatlantic.com/international/archive/2018/02/china-surveillance/552203/.
  43. Tajha Chappellet-Lanier, "Pentagon's Project Maven Responds to Criticism: There Will Be Those Who Will Partner with Us," Fedscoop, 1 May 2018, accessed 6 January 2021, https://thebulletin.org/2017/12/project-maven-brings-ai-to-the-fight-against-isis/.
  44. Glenn Thrush and Alan Rappeport, "Trump Cautious on China Inquiry over Intellectual Property Theft," New York Times (website), 12 August 2017, accessed 6 January 2021, https://www.nytimes.com/2017/08/12/us/trump-cautious-on-china-inquiry-over-intellectual-property-theft.html.
  45. Richard Sisk, "Google's Work with China Eroding U.S. Military Advantage, Dunford Says," Military.com, 21 March 2019, accessed 6 January 2021, https://www.military.com/defensetech/2019/03/21/googles-work-china-eroding-us-military-advantage-dunford-says.html.
  46. Freedberg, "Army Futures Command Wants YOU (To Innovate)."
  47. Ibid.
  48. Matt O'Brien, "Pentagon Adopts New Ethical Principles for Using AI in War," Associated Press, 24 February 2020, https://apnews.com/article/73df704904522f5a66a92bc5c4df8846.

 

Dan G. Cox, PhD, is a professor of political science at the U.S. Army School of Advanced Military Studies. He serves as a liaison between that school and the Defense Advanced Research Projects Agency's Causal Exploration program. Cox has also served in the NATO Partnership for Peace Program, helping to improve the professional military education system of the military of the Republic of Armenia. He has been the reviews editor for Special Operations Journal, and he is on the board of executives for the Special Operations Research Association.

 


May-June 2021