Artificial Intelligence as a Combat Multiplier
Using AI to Unburden Army Staffs
Maj. Michael Zequeira, U.S. Army
With the advancements in artificial intelligence (AI) prevalent in news stories across the world, military practitioners, academics, and policymakers alike wonder what role these technological advancements will play in warfare. What many of them fail to realize is that the U.S. military has been using forms of AI for decades. While truly reliable autonomous weapons are still far from deployment ready, AI can already serve as a critical combat multiplier in offensive and defensive operations: unburdening planning staffs so they can focus their efforts on the uniquely human aspects of operational planning.
AI Is Already in Use in the U.S. Army
AI has become a catch-all phrase for any machine behavior that replicates human tasks, but one must be more specific to assess AI's implications on the battlefield or in society. Two subtypes of AI are machine learning and deep learning. Machine learning describes a computer that learns and improves by processing data without being explicitly programmed to do so, using statistics to conduct probability analysis and, in some cases, make predictions.1 Deep learning is a subfield of machine learning that processes large amounts of data to find relationships and patterns humans may not be able to detect.2 While deep learning is harder to scale due to its complexity, machine learning is already commonplace in Army systems. One such system is the Phased Array Tracking Radar to Intercept on Target (PATRIOT), which uses a complex network of computers and algorithms to track incoming objects, classify them as threat or friendly, and launch surface-to-air missiles.3 The Army is also investing in other AI tools like Project Maven, “a tool that could process drone footage quickly and in a useful way.”4
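To make the distinction concrete, the short sketch below (in Python, using scikit-learn) shows the basic machine-learning pattern described above: a model fits statistical parameters to labeled examples and then outputs a probability for a new observation. The radar-track features and figures are invented for illustration and do not reflect how PATRIOT or any fielded Army system actually works.

```python
# A notional illustration of supervised machine learning: invented radar-track
# data only, not how PATRIOT or any fielded Army system works.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Labeled examples: [speed (m/s), altitude (m), radar cross section (m^2)]
X_train = np.array([
    [240.0, 9000.0, 5.0],    # friendly transport
    [270.0, 10500.0, 8.0],   # friendly transport
    [300.0, 11000.0, 12.0],  # friendly airliner-like track
    [680.0, 3000.0, 0.1],    # threat cruise missile
    [720.0, 2500.0, 0.1],    # threat cruise missile
    [650.0, 2000.0, 0.2],    # threat cruise missile
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = friendly, 1 = threat

# "Learning" here means fitting statistical parameters to the labeled data.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# The trained model outputs a probability for a new, unseen track.
new_track = np.array([[700.0, 2800.0, 0.15]])
print(model.predict_proba(new_track))  # e.g., [[0.03, 0.97]] -> likely threat
```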
Other, less well-known systems, such as the Tactical Intelligence Targeting Access Node, are under development with AI and machine-learning capabilities.5 In addition to these new systems, others currently fielded by the Army employ basic machine learning, such as the Intelligence Fusion Server, which automatically performs functions like correlation, association, and normalization when programmed correctly by operators. Machine learning is not new to the Army.
However, concerns about more advanced AI are growing, particularly about its ability to apply basic concepts of warfare such as proportionality and discrimination. Previous catastrophes with defensive AI-enabled systems, such as PATRIOT batteries firing on friendly aircraft during the 2003 invasion of Iraq, give pause when considering new smart, lethal technology to bolster defenses.6 Similarly, Israel’s use of AI systems on the offense during the current war in Gaza highlights serious ethical concerns. Two AI systems in use by Israel, “Lavender” and “Where’s Daddy?,” may be contributing to higher rates of civilian casualties than observers are comfortable with.7 Both examples show that lethal AI is still not ready for full deployment within the Army. However, the Army can still leverage these technological advancements to increase its operational effectiveness during offensive and defensive operations without risking unnecessary civilian casualties or fratricide.
Unburdening Intelligence Staffs with AI
Most Army operational planning begins with the vaunted military decision-making process (MDMP). Embedded within that process is step two, mission analysis, and one of its substeps, intelligence preparation of the battlefield (IPB), now known as intelligence preparation of the operational environment (IPOE).8 While these processes are revered for producing effective operational plans, a common theme among combat training center rotations is the constant fight for the time needed to complete them and their associated products.9 These processes are conducted for both offensive and defensive operations and are an opportunity to leverage AI to provide a technological boost to military planners.
Mission analysis consists heavily of IPOE. Two of IPOE’s substeps are crucial to its final substep, determining threat courses of action: describing the environment’s effects on operations and evaluating the threat.10 In today’s digital age, both substeps rely heavily on data that exists across myriad government and commercial sites. Commanders and intelligence professionals alike usually associate the environment’s effects on the battlefield with two products: the weather effects matrix and the modified combined obstacle overlay (MCOO).
The data already exists across commercial and government sites to build large language models and feed generative AI for both weather and terrain analysis. Once these models are built and deployed on government systems, intelligence staffs could quickly generate weather predictions and effects matrices based on time of year and location, including effects on forces and equipment, rather than manually updating a PowerPoint slide. Similarly, intelligence analysts could prompt an AI model to generate an MCOO in a matter of seconds, quickly identifying possible routes for an attack during an offensive operation or an enemy’s likely avenue of approach during a defense. The Netherlands Organization for Applied Scientific Research showed how AI could create tactical spatial objects of different tiers, from foundational terrain data to support for specific courses of action complete with graphic control measures.11 This technology is already in use by some units, with AI models analyzing satellite imagery to identify terrain features and suggest targets.12 The key is scaling this technology to make it available to staffs across the Army.
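As a minimal sketch of the workflow described above, the example below assembles a prompt from notional forecast data and defines a call to a hypothetical government-hosted inference service. The endpoint URL, model name, and forecast values are assumptions for illustration, not an existing Army capability; the same pattern, with different source data and prompts, would apply to drafting an MCOO from terrain layers.

```python
# A minimal sketch assuming a hypothetical government-hosted inference
# endpoint. The URL, model name, and forecast data are placeholders only.
import json
import requests

forecast = {
    "location": "training area (notional)",
    "period": "next 96 hours",
    "conditions": ["winds 25-35 kts", "visibility < 1 km in blowing dust", "no precipitation"],
}

prompt = (
    "Using the forecast below, draft a weather effects matrix describing impacts "
    "on aviation, ground maneuver, air defense, and sustainment, formatted as a table.\n"
    + json.dumps(forecast, indent=2)
)

def draft_weather_effects(prompt_text: str) -> str:
    """Send the prompt to a (hypothetical) SIPR-hosted model and return its draft."""
    response = requests.post(
        "https://llm.example.mil/v1/generate",  # placeholder URL, not a real service
        json={"model": "staff-assistant", "prompt": prompt_text, "max_tokens": 800},
        timeout=60,
    )
    return response.json()["text"]

# An analyst would review and correct the draft before briefing it:
# print(draft_weather_effects(prompt))
```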
When analyzing the threat, the intelligence staff must consolidate data from multiple sources on threat orders of battle, weapons systems capabilities, and countless other data points that provide objective information to the commander. This data exists across multiple platforms, including the Intelligence Knowledge Network, the Operational Environment Data Integration Network, and other sources. It could be fed into a Department of Defense (DOD) large language model hosted on the secure internet protocol router (SIPR) network. Intelligence staffs could then query all the data they need on enemy units and equipment in one fell swoop rather than individually researching each piece of equipment associated with an enemy unit.
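The sketch below illustrates the underlying idea at a small scale: notional order-of-battle records from two sources are normalized into one store that answers a single query about an enemy unit. The unit names, systems, and counts are invented, and a fielded solution would feed this consolidated data to a language model rather than a simple table.

```python
# A notional consolidation of order-of-battle records from multiple sources
# into one queryable store. All units, systems, and counts are invented.
import pandas as pd

ikn_records = pd.DataFrame([
    {"unit": "81st Mech Bde", "system": "T-80U", "category": "armor", "count": 40},
    {"unit": "81st Mech Bde", "system": "BMP-2", "category": "IFV", "count": 90},
])
odin_records = pd.DataFrame([
    {"unit": "81st Mech Bde", "system": "2S19", "category": "artillery", "count": 18},
    {"unit": "14th AT Bn", "system": "9P157", "category": "anti-tank", "count": 12},
])

# Normalize and merge into a single store (analogous to feeding one model).
order_of_battle = pd.concat([ikn_records, odin_records], ignore_index=True)

def query_unit(unit_name: str) -> pd.DataFrame:
    """Return every known system and count for the named enemy unit."""
    return order_of_battle[order_of_battle["unit"] == unit_name]

# One query returns everything associated with the unit at once.
print(query_unit("81st Mech Bde"))
```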
While it may be tempting to dismiss this technology because these are relatively simple tasks, leaders cannot ignore the time it would buy back for the intelligence staff. Any time not spent consolidating data into products that an AI model could quickly generate is time the staff could spend developing more complete enemy courses of action with more branches and sequels. Additionally, quickly producing products like an MCOO or terrain effects matrix would give more time to the other warfighting functions during MDMP. Identifying likely avenues of approach earlier in preparation for a defense would allow protection and maneuver units more time to conduct thorough engagement area development. In the offense, it could allow units to request and receive necessary support earlier, such as additional mine-clearing capabilities, to best support their attack.
Buying Time for the Rest of the Staff
AI’s ability to decrease the burden of the MDMP is not limited to the intelligence staff. Every other section on the Army staff has products associated with MDMP that are time-consuming to produce.13 These staff sections are also responsible for tracking large amounts of data, such as logistics status reports, operational readiness rates, personnel status, and other administrative reports. These reports are critical throughout MDMP because they inform the commander of his or her available combat power, and they are crucial to step three of MDMP, course of action development.14
Large language models, a subset of AI particularly adept at producing text-based products, could assist in developing the products and reports that enable the commander’s decision-making.15 While this work is important because it drives decision-making, collecting and displaying the data manually is time-consuming and often mundane. Peter Thiel, a cofounder and chairman of Palantir, argues that this is exactly the kind of task where AI could excel in enabling the Army.16
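The sketch below shows one way such a report might be drafted: structured status data is turned into a prompt for a hosted model to summarize for the commander. The unit, figures, and the send_to_model placeholder are all notional; no specific Army system or inference service is assumed.

```python
# A notional sketch of turning structured report data into a prompt for a
# hosted large language model to draft a narrative combat power summary.
logstat = {
    "unit": "1-502 IN",          # notional
    "personnel_strength": 0.91,  # fraction of assigned strength present
    "operational_readiness": {"M2A3": 0.88, "M1151": 0.95},
    "classes_of_supply": {"III(B)": "72 hrs", "V": "1.5 combat loads"},
}

prompt = (
    "Draft a one-paragraph combat power summary for the commander using the "
    f"following report data. Flag any shortfall that limits offensive operations.\n{logstat}"
)

def send_to_model(text: str) -> str:
    """Placeholder for a call to a DOD-hosted inference endpoint (not yet fielded)."""
    raise NotImplementedError("No such service is assumed to exist today.")

print(prompt)  # in practice: print(send_to_model(prompt))
```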
As with the intelligence section, the advantage here is the time the staff gains for planning that is uniquely human. Herwin W. Meerveld et al. describe implicit knowledge, cognitive flexibility, and creativity as uniquely human strengths.17 Leveraging AI in processes that do not require these skill sets allows military planners to apply them to the parts of MDMP that do, like course of action development.18 In addition to having more time to develop complete courses of action that leverage the staff’s collective experience, staffs will also be able to give more time back to subordinate units by expediting arrival at the final step of MDMP, orders production.19 Getting an order into the hands of subordinate units more quickly leads to more time for preparations and rehearsals, and the freedom to change the plan more rapidly as friendly or enemy conditions change. More time for rehearsals allows for better preparation for actions on the objective in an attack, better engagement area development in the defense, and more opportunities to identify friction in any plan.
The Benefits and Risks
The benefit of using AI to supplement these processes is time given back to the unit, which can then be applied to the myriad tasks that must be completed in preparation for offensive and defensive operations. Additionally, training a large language model for deployment on tactical networks is relatively inexpensive, especially when compared to other DOD programs. OpenAI’s GPT-3 cost around $4 million to train, plus the cost of the graphics processing units needed to run the training.20 Compare this to the $34 million cost of the Army Intelligence Data Platform, and one recognizes the large benefit the Army could give planning staffs for a relatively small sum.21
There are risks associated with deploying AI across Army staffs to enable faster MDMP. The first is that as the models proliferate across staffs, planners may begin to trust them too much, becoming overreliant on them. During a 2011 experiment, subjects repeatedly followed a robot to a fire exit during a simulated evacuation despite clearly marked exit signs pointing in the opposite direction of where the robot led them.22 This is the risk that staff officers and noncommissioned officers must guard against. As with reports from human subordinates, staffs must trust but verify the data they receive. This can be practiced during the many repetitions of training that occur before a scheduled deployment or before a combat training center or Warfighter exercise that serves as a staff’s culminating event. Multiple iterations in which the staff builds confidence in its AI assistants will give it the assurance needed during critical moments and the knowledge of which data to verify to ensure the models are performing as required.
The second risk is atrophy of the skills needed to conduct MDMP effectively. That risk is compounded by infrastructure: given the large amounts of power required to run large language models and the expeditionary conditions in which the Army sometimes deploys, a unit may lack the physical architecture needed to run a large language model alongside its other command node systems. This risk must be mitigated with training leading up to the culminating event. Like training on primary, alternate, contingency, and emergency communications plans, staffs must be able to complete MDMP with and without their technological assistants. However, AI models could still assist staffs operating from sanctuary to enable faster turns on orders for their downtrace units. As hardware advances, these AI assistants can be pushed to lower echelons once edge computing reaches the maturity to operate reliably in a denied, degraded, intermittent, or limited environment. The longer the Army waits to adopt AI assistants at higher echelons, the longer it will take to develop the requirements that eventually bring these capabilities down to the tactical edge.
Conclusion
Both offensive and defensive operations rely on Army planning processes to posture units for success. Historically, these processes produce good results, but staffs often fight for the time to execute them to their fullest and generate the products associated with each step.23 Developing AI that can quickly generate many of these products would unburden staffs and allow more focused preparation for the conduct of operations.
AI assistants could ingest far more data than any individual and could raise issues a human staff might miss. Leaders at echelon can use the training cycle before a deployment to guard against the risks of false security and skill atrophy. Additionally, deploying AI systems in a planning capacity would benefit units in the offense and defense without risking unnecessary loss of life. The Army has the data and the ability to begin developing these staff assistant models to benefit the force. The earlier it does so, the earlier it will see defense-oriented AI mature and begin to lower the risk of applying it for more kinetic purposes.
Notes
1. Tom Taulli, Artificial Intelligence Basics: A Non-Technical Introduction (Monrovia, CA: Apress, 2019), 41–42.
2. Ibid., 71.
3. David Axe, “That Time an Air Force F-16 and Army Missile Battery Fought Each Other,” War Is Boring, 5 July 2014, https://medium.com/war-is-boring/that-time-an-air-force-f-16-and-an-army-missile-battery-fought-each-other-bb89d7d03b7d.
4. Kelsey D. Atherton, “Targeting the Future of the DOD’s Controversial Project Maven Initiative,” C4ISRNET, 27 July 2018, https://www.c4isrnet.com/it-networks/2018/07/27/targeting-the-future-of-the-dods-controversial-project-maven-initiative/.
5. “TITAN Brings Together Systems for Next Generation Intelligence Capabilities,” Program Executive Office–Intelligence, Electronic Warfare and Sensors, 27 September 2021, https://peoiews.army.mil/2021/09/27/titan-brings-together-systems-for-next-generation-intelligence-capabilities/.
6. Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton, 2018), 137–45.
7. “How Israel Is Using ‘Lavender’ and ‘Daddy’ to Identify 37,000 Hamas Operatives,” Economic Times (website), 9 April 2024, https://economictimes.indiatimes.com/news/defence/how-israel-is-using-lavender-and-daddy-to-identify-37000-hamas-operatives/articleshow/109155124.cms.
8. Field Manual (FM) 5-0, Planning and Orders Production (Washington, DC: U.S. Government Publishing Office [GPO], 2022), 5-9.
9. Rex Howry, Caleb J. Goble, and Matthew S. Lewis, “Fighting for Time at JRTC,” Infantry Magazine 110, no. 1 (Spring 2021): 9–15, https://www.moore.army.mil/infantry/magazine/issues/2021/Spring/pdf/5GobleJRTC.pdf.
10. Army Techniques Publication 2-01.3, Intelligence Preparation of the Battlefield (Washington, DC: U.S. GPO, 2019).
11. Nico M. de Reus, Philip J. M. Kerbusch, and Maarten P. D. Schadd, “Geospatial Analysis for Machine Learning in Tactical Decision Support,” in Towards Training and Decision Support for Complex Multi-Domain Operations (Brussels: NATO Science and Technology Organization, 2021), https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-MSG-184/MP-MSG-184-08.pdf.
12. “Artificial Intelligence Is Already Changing How the US Wages War,” Bloomberg Law, 28 February 2024, https://news.bloomberglaw.com/insurance/artificial-intelligence-is-already-changing-how-the-us-wages-war.
13. FM 5-0, Planning and Orders Production, 5-22.
14. Ibid., 5-23.
15. Herwin W. Meerveld et al., “The Irresponsibility of Not Using AI in the Military,” Ethics and Information Technology 25, no. 14 (2023), https://doi.org/10.1007/s10676-023-09683-0.
16. Anthony King, “AI at War,” review of Four Battlegrounds: Power in the Age of Artificial Intelligence, by Paul Scharre, War on the Rocks, 27 April 2023, https://warontherocks.com/2023/04/ai-at-war/.
17. Meerveld et al., “The Irresponsibility of Not Using AI in the Military.”
18. Ibid.
19. FM 5-0, Planning and Orders Production, 5-3.
20. Jonathan Vanian and Kif Leswing, “ChatGPT and Generative AI Are Booming, But the Costs Can Be Extraordinary,” CNBC, last updated 17 April 2023, https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html.
21. Matthew Beinart, “Palantir Receives $34 Million Software Order for Army Intelligence Data Platform,” Defense Daily, 22 February 2022, https://www.defensedaily.com/palantir-receives-34-million-software-order-for-army-intelligence-data-platform/army/.
22. Ayanna Howard, “In AI We Trust—Too Much?,” MIT Sloan Management Review (website), 26 March 2024, https://sloanreview.mit.edu/article/in-ai-we-trust-too-much/.
23. Howry, Goble, and Lewis, “Fighting for Time at JRTC,” 9–15.
Maj. Michael Zequeira, U.S. Army, is an officer in the 101st Airborne Division at Fort Campbell, Kentucky, where he serves as the G-2 operations officer in charge. He holds a BS from Anderson University (South Carolina), an MA from the University of Arizona, and an MMAS from the Command and General Staff College at Army University. His previous assignments include 4th Infantry Division, 10th Mountain Division, and Army Test and Evaluation Command, and deployments to Afghanistan, Iraq, Kuwait, and multiple eastern European countries. Zequeira is also an alumnus of the Information Advantage Scholars Program at the Command and General Staff School at Fort Leavenworth, Kansas.