Assessing the Modern Fight

 

Lt. Col. Mitchell Payne, U.S. Army

 


 
[Photo: a tactical operations center near Fort Greely, Alaska]

One of the most important aspects of operating an automobile is the driver’s ability to look through the front windshield to see and understand where the vehicle is going. The driver’s ability to use the side and rearview mirrors to gain situational awareness is important, but the critical aspect of driving is the ability to look forward to see where one is headed. This ability to look ahead allows the driver to adjust behavior—to speed up, slow down, or change lanes—to arrive at the intended destination.

Leading an organization and driving a car are, of course, categorically different undertakings, with leadership being by several orders of magnitude the more difficult task. The analogy is admittedly simplistic; most military vehicles have more than one person. But moving from the simple analogy to the complex operation, both driving a vehicle and leading an organization require an awareness of the environment, a forward-looking vision, and a clear understanding of the destination in order to succeed. The military assessment process is how staffs and commanders achieve a shared understanding of their surrounding environment and of the way forward to reach the required military end state.

Every commanding general this author has observed across multiple Warfighter exercises (WFXs) and mission command training sessions has highlighted the importance of getting assessments right. Sadly, this author has also heard every one of those commanding generals express concern and frustration that their organizations are not getting the assessment process “right.” Such unanimous concern across multiple general officers suggests either a gap in organizational assessment doctrine or a lack of clarity in how to apply assessment doctrine.

One reason commanding generals may express frustration with the assessment process is that all too often the process narrowly focuses on the enemy battle damage assessment (BDA). When an organization limits the assessment process to the effects it is having on the enemy, the resulting assessments cannot support the commander’s ability to look ahead. Focusing solely on BDA is like driving forward by only looking in the rearview mirror; operational assessments must be forward-looking to inform the commander’s ability to visualize, describe, and direct the operation.1

This article attempts to bridge the gap between the doctrine on organizational assessments and the friction arising from applying that doctrine during large-scale combat operations (LSCO). The history and doctrine of operational assessments may help us see the problems and friction of assessments in a new light and may suggest tangible actions that divisions and corps can take to use the assessment process to inform the commander’s visualization and rapid decision-making.

History and Doctrine of Organizational Assessments

U.S. military doctrine supporting operational assessments—both Army and joint—provides a robust framework for understanding and applying assessments within organizations. Admittedly, however, there may be a disconnect between the doctrine and its application. Recent doctrinal publications address some of this disconnect. Chapter 8 of the recently published Field Manual 5-0, Planning and Orders Production, discusses the organizational assessment process, and the update ties assessment to every step of the operations process.2 Despite this helpful update, however, evidence from multiple recent WFXs shows that at both the division and the corps level, a gap remains in the application of our doctrine.

Assessments are such a fundamental aspect of warfare that often they take place informally without any fanfare. Almost every commander asks a simple question like “How are we doing?” or “Are we winning?” when returning to their command posts. Yet the history behind the formation of operational assessments is a little more complicated.

The Vietnam War saw the first true systematization of operational assessments. Then-Secretary of Defense Robert McNamara came from a background at Harvard Business School, the Army Air Forces Statistical Control Division in World War II, and Ford Motor Company.3 This quantitatively focused background helped formalize the military assessment process by emphasizing numerical metrics—munitions expended, body counts, hamlets pacified—as the definition of “success” in Vietnam from 1966 to 1968.4

In the wake of Vietnam, the military’s focus shifted to large-scale combat against the Soviet military for the remainder of the Cold War, a period nevertheless punctuated by numerous small-scale or limited military operations. The vacillation between quantitative and qualitative assessments to some degree reflected this shifting focus.

The onset of counterinsurgency operations in the post-9/11 era saw an initial return to numerically based assessments. Many senior leaders may remember the broad swath of “stoplight” charts and heaps of statistics assiduously tracking the number and progression of each member of the Iraqi and Afghan military forces.5 In hindsight, the comfort of steadily improving quantitative numbers buoyed false confidence in the qualitative readiness of those partnered units, as evidenced by the wholesale surrender of Afghan military forces amid the Taliban resurgence in the summer of 2021.6

The mid-2010s, however, saw a holistic reexamination of Army doctrine and strategic capabilities. After more than a decade of small-scale counterinsurgency operations, the Army returned to LSCO. The return to a decisive-action focus at the combat training centers and WFXs forced training audiences to reckon with a near-peer and free-thinking enemy.

This short history lesson in organizational assessments is important because it shows that the military has a foundational bias toward quantifiable metrics within the assessment process. As military leaders, we tend to think that if we can assign a specific number or percentage to an assessment, then that assessment is inherently better or more scientific. In the return to LSCO, however, the ghosts of data-driven assessments have reappeared. Commanders and, more importantly, their staffs have once again become fixated on the quantifiable aspects of assessments that feed directly into the division targeting process.7

This numeric fixation often means leaders relegate the assessment process to the intelligence and fires warfighting functions (WfF), seeing organizational assessments merely as a means of feeding the next targeting cycle. While integrating assessments into targeting is extremely important, doing so may come at the cost of a broader qualitative assessment and understanding of the organization’s ability to achieve the operational end state. If an organization only looks rearward at the effects it has had on an enemy force, it will not look forward to see and adjust to the curves in the road ahead.

Defining Assessments

Surprisingly, the doctrine on assessments does not explicitly focus on quantifiable metrics. Rather, U.S. Army doctrine defines an assessment as “the determination of the progress toward accomplishing a task, creating a condition, or achieving an objective.”8 Inherent in this definition is an understanding that assessments are tied to mission objectives. Joint doctrine makes this even clearer: commanders use the assessment process to “assess the progress of the operation toward the desired end state.”9

Army doctrine also notes the complexity of getting the assessment process correct: “There is no single way to conduct assessments. Every situation has its own distinctive challenges, making every assessment unique.”10 The nebulous nature of organizational assessment helps explain why assessments have traditionally swung between a hard focus on quantitative evaluation and qualitative assessment.11 Doctrinally, assessments involve three main activities—monitoring, evaluating, and recommending.12 Another friction point occurs when the assessment team fails to balance all three, as organizations often default to equating assessment with evaluation at the cost of monitoring and recommending.

Finally, the leaders of the assessment process must have a solid understanding of the intended audiences (internal, higher, subordinate, and adjacent), how those audiences receive information, and what the intended end state or objective is for each of them.13 These complexities contribute to a “failure cycle” in which a lack of organizational advocacy and command disinterest converge with a poorly defined assessment process and inadequate assessment products.14 Despite these complexities, one way to avoid assessment failure and simplify the assessment process is to break it down into two broad categories: combat assessments and operational assessments.

Combat Assessments

The term “assessment” has become synonymous with a singular focus on the effects one has achieved on the enemy at hand. “How many of the enemy did we kill?” “What effect does that have on the enemy?” “Do I have to reengage the enemy?” These questions are critically important to understanding how a unit’s actions affect the operating environment, and they comprise the elements of combat assessment.15 Doctrine presents combat assessment as a subset of the targeting process that includes munitions effectiveness assessments and reengagement recommendations. These two elements are germane to the targeting discussion but may have little impact on the overall assessment discussion outside of specific targeting decisions. Far more critical to the larger assessment process is the first component of combat assessment—BDA.

Simply stated, BDA is how organizations understand what they did to the enemy. At its most fundamental level, BDA is a collection of individual data points: “We destroyed XX pieces of long-range artillery.” Doctrinally speaking, BDA “includes known or estimated enemy unit strengths, degraded, neutralized, or destroyed enemy weapon systems, and all known captured, wounded, or killed enemy personnel during the reporting period.”16 But each of those components is merely an individual data point—the simple “what.” Only when one adds a layer of analysis—the “so what”—can those specific data points be connected and woven into a coherent narrative. Doctrinally, therefore, BDA is primarily an intelligence responsibility.17

Once again, however, the quantitative, data-focused nature of BDA combines with the military’s bias toward numerical assessments. This all too often leads to the presentation of data apart from analysis. Moreover, the focus on ensuring 100 percent fidelity in data accuracy may come at the cost of the analytical capacity to determine what the data means. It is immaterial to know how many of a specific weapon system we have destroyed if one is unable to put that data in context. The “what” is only important as it feeds the “so what,” and the “so what” is only marginally important unless it allows the intelligence analyst to predict what the enemy will do.18 The problem with using BDA as the singular metric for organizational assessments is that staff members will focus exclusively on getting BDA correct at the cost of analyzing what the data means seventy-two to ninety-six hours from now. The role of the intelligence team in combat assessment is to capture the BDA on the enemy and use it to predict the enemy’s courses of action in an event template and matrix.19 The event template is the single most important document the intelligence team produces, and it leads directly to the other half of organizational assessments: operational assessments.
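To make the gap between the “what” and the “so what” concrete, consider a minimal sketch, using entirely notional numbers and thresholds, of how raw BDA counts might be rolled up into the strength estimate and analytic judgment that feed the event template:

```python
# Notional BDA rollup: individual "what" data points aggregated into the
# "so what" an analyst needs, i.e., estimated remaining enemy strength.

starting_systems = 54          # assessed long-range artillery pieces at full strength (notional)
destroyed_reports = [6, 4, 8]  # confirmed kills across three reporting periods (notional)

remaining = starting_systems - sum(destroyed_reports)
percent_strength = remaining / starting_systems * 100
print(f"{remaining} of {starting_systems} systems remain ({percent_strength:.0f}% strength)")

# The counts above are the "what." The "so what" is the analytic judgment
# layered on top, here caricatured as simple strength bands that would inform
# the event template's assessment of likely enemy behavior.
if percent_strength < 50:
    judgment = "combat ineffective; likely to withdraw or require reinforcement"
elif percent_strength < 70:
    judgment = "degraded; capable of only limited counterfire"
else:
    judgment = "largely intact; expect continued massed fires"
print("Assessment:", judgment)
```

The arithmetic is trivial by design; the value lies in the final judgment line, which is what the analyst actually carries forward into predicting enemy courses of action.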

Operational Assessments

Returning to the doctrinal definition, it bears reemphasizing that the assessment process is inherently tied to operational end states.20 At the most basic level, operational assessments simply ask, “Are we on track to achieve our end state?” Continuing the driving analogy, if BDA is like looking in the side and rearview mirrors, then operational assessments are akin to looking through the windshield and asking, “Am I on track to get where I want to go?” This question of intended destination—or end state—draws a parallel to operational art.

One of the main functions of operational art is to ensure that tactical actions occur under the most advantageous conditions possible.21 Operational art has many elements, but the first is understanding the end state and desired future conditions. Other pertinent intersections between operational art and the operational assessment process are decisive points, tempo, operational reach, and risk. Each of these aspects, as well as end states and conditions, requires continuous monitoring (i.e., assessing) to evaluate progress and the changing conditions in the operational environment that may alter the achievable end state.

[Figure 1. The general assessment questions]

Tying operational assessments to operational art suggests two distinct points. First, the assessment process must be fully integrated across all WfFs.22 It cannot be relegated to one or two WfF representatives with an operations research and systems analysis (ORSA) officer in tow. To achieve the desired end state, organizations must apply all elements of combat power toward that goal. If recent operations in Ukraine have demonstrated anything, it is that the most aggressive maneuver plan in the world becomes irrelevant if the maneuver forces outrun their logistical capabilities. The operational assessment process cannot be left to the intelligence and fires communities—it must involve all the WfFs.

Second, the operational assessment process must take place in a larger context. The organization’s assessment process must feed some type of plans update brief to the commander to reframe (as necessary) the ground maneuver plan. Failure to do so results in wasted staff effort and truncated planning timelines for staff and subordinates. All too often during WFXs, observer coach/trainers have watched a division targeting decision board devolve into a wargaming session because the ground maneuver plan had changed so much that the original end state was unachievable. As staff leaders assess the operation and determine whether decisive points can be achieved given the operating environment, someone must tell the commander.23 Tying the assessment working group into a critical path that feeds the planning and targeting processes is a critical step toward enabling the commander to visualize, describe, and direct the organization.24

Assessment Working Group Framework—A Way

If one accepts the premise that the assessment process must be integrated across the WfFs and within the larger planning process, then further implications follow. The division or corps assessment working group (AWG) is the meeting where, by doctrine, WfF staff representatives gain a shared understanding and provide pertinent information for the commander.25 Army doctrine also outlines six general questions for members of the AWG to discuss.26 Figure 1 lists these general assessment questions.

Here, however, experience indicates there may be a gap in the assessment doctrine. While the general assessment questions are helpful in broadly shaping the staff’s understanding, junior staff members may find them too broad. The lack of specificity within those general questions may not produce the desired level of shared understanding across all WfF elements and to some degree may contribute to the overall frustration with the assessment process. Subsequent assessment doctrine on the AWG also suggests multiple analytical tools such as graphs, charts, and pivot tables—all of which still approach assessments from a quantitatively biased perspective.27

A slight departure from doctrine that adds a greater degree of specificity by WfF to the AWG process may therefore be beneficial. Figure 2 offers an AWG meeting framework as “a way” to frame the problem of organizational assessments with a greater level of specificity.

The first column, “WfF,” offers an integrated approach to the assessment process and incorporates subordinate unit feedback. The “Inputs” to the meeting are broken down by WfF and generally consist of each function’s running estimates. The “Assessment” column suggests the questions each WfF representative must consider and communicate to the group writ large. “Risk” is broken down into risk to force (F) and risk to mission (M). “Outputs” are the tangible products that must be updated based on the assessment and risk, all of which drive a shared understanding across the WfFs of the feasibility of the current and future end states. Finally, the last column, “Feeds,” indicates how the outputs from each WfF logically flow into follow-on meetings such as the plans update brief, the targeting working group, and even the protection and sustainment working groups. The figure captures much of the prevailing joint and Army doctrine.28

[Figure 2. Assessment working group meeting framework]

While this list is in no way meant to capture every aspect that could be assessed, it offers an integrated framework to help each WfF understand how it must contribute to the overall operation. Irrespective of the specific questions or assessments each WfF representative raises, the general framework applies: updated intelligence drives adjustments to the maneuver plan, fires must shape targeting to support the maneuver plan, and the maneuver plan in turn affects protection, sustainment, and the command-and-control architecture.
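For staffs that capture the framework digitally, each row of figure 2 could be modeled as a simple structured record. The sketch below is purely illustrative: the field names mirror the figure’s columns as described above, while the sample intelligence entry is hypothetical rather than drawn from the figure itself.

```python
from dataclasses import dataclass

@dataclass
class WfFAssessmentRow:
    """One row of the AWG framework: a single warfighting function's slice of the meeting."""
    wff: str              # warfighting function, e.g., "Intelligence" or "Fires"
    inputs: list          # running estimates and reports brought into the AWG
    assessment: str       # the WfF's answer to its framing question
    risk_to_force: str    # risk to force (F)
    risk_to_mission: str  # risk to mission (M)
    outputs: list         # products updated as a result of the assessment
    feeds: list           # follow-on meetings the outputs flow into

# Hypothetical intelligence row; the authoritative questions and products belong to figure 2.
intel_row = WfFAssessmentRow(
    wff="Intelligence",
    inputs=["intelligence running estimate", "BDA rollup", "event template"],
    assessment="Enemy most likely course of action unchanged; long-range fires degraded",
    risk_to_force="moderate",
    risk_to_mission="low",
    outputs=["updated event template and matrix", "refined collection plan"],
    feeds=["plans update brief", "targeting working group"],
)
print(f"{intel_row.wff} feeds: {', '.join(intel_row.feeds)}")
```

The design point is the “Feeds” field: making the downstream consumers of each WfF’s outputs explicit is what keeps the AWG tied to the rest of the critical path rather than becoming a self-contained reporting drill.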

Placing the AWG in the Critical Path

The placement and timing of the AWG on the unit’s critical path is another factor that bears consideration. Because many outputs of the AWG feed other critical organizational meetings, the AWG is best placed in the early morning. This placement is also supported by the fact that in LSCO, many major operations take place at night; scheduling the AWG in the early morning allows all WfF representatives to gain a shared understanding of the results of the previous night’s operations (on both enemy and friendly units). Figure 3 suggests the placement and timing of the AWG in a unit’s battle rhythm.

[Figure 3. Placement and timing of the AWG in a unit’s battle rhythm]

To the degree possible, the outputs from each meeting become the inputs to the next. This model introduces the plans update brief into the organizational battle rhythm. The intent behind this meeting is to back brief the commander on recommended updates to the maneuver plan, with the approved maneuver plan as the meeting’s output. Approving the maneuver plan prior to the targeting cycle allows a more focused discussion on how best to use targeting to support maneuver and results in greater organizational efficiency within the targeting working group and decision boards. Additionally, holding the plans update brief in the morning helps focus the commander on the deep and future fight early in the day, before battlefield circulation, giving commanders more time to reflect on their intent and end state prior to the targeting decision board.

Meetings with the commander are annotated with two stars. The meetings highlighted in green are generally aligned under and led by operations personnel, whereas the meetings in red are generally led and chaired by the fires community. While every battle rhythm must be adjusted to fit the context of the specific operation and higher headquarters, the recommended timings are based on observations across multiple WFXs.

In many ways, the planning and operations critical path represents a daily iteration of the military decision-making process. In this framework, the assessment working group is akin to a daily mission analysis, whereby staff and commanders gain a shared understanding of the changes in the operating environment that may keep the organization from reaching its operational end state. Hence, placing the AWG at the start of the daily military decision-making process cycle generates a common logical foundation for resynchronizing organizational maneuver planning and integrating enabler assets in support of the updated maneuver plan.

Reducing Friction in Operational Assessments

While this article has highlighted the relevance of qualitative assessments in LSCO, quantitative assessments remain a tool in the organizational assessment process. Whether it is calculating the destruction of enemy combat power or using a correlation of forces and means calculator to determine the combat power required at a given place and time, quantifiable data adds a degree of scientific methodology to organizational assessments. At division and higher levels, the ORSA can add a powerful tool to the assessment process, but only if properly employed. ORSAs are like a highly calibrated torque wrench: used properly, they add a great deal of value, but a torque wrench is ruined if used as a hammer. ORSAs can apply powerful statistical analyses to determine relationships between variables, but the organizational assessment process (and operational assessments in particular) cannot be reduced to mere numbers on a spreadsheet. In addition to the bias toward quantitative data, two other biases add friction in the context of assessments: groupthink and confirmation bias.
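By way of illustration, the sketch below shows the kind of simple, transparent calculation an ORSA might contribute: a correlation-of-forces ratio built from assessed combat power. The combat power values, strength fractions, and the 3:1 rule-of-thumb threshold are notional assumptions for the example, not doctrinal constants.

```python
# Notional correlation-of-forces sketch: compare assessed friendly and enemy
# combat power at a single point of attack. Every number here is illustrative.

def remaining_power(units):
    """Sum each unit's baseline combat power value scaled by its assessed strength."""
    return sum(value * strength for value, strength in units)

# (combat power value, assessed strength fraction) per unit -- notional inputs
friendly = [(10.0, 0.85), (10.0, 0.90), (6.0, 0.70)]  # e.g., two armor bns, one mech bn
enemy = [(8.0, 0.60), (8.0, 0.45)]                    # e.g., two degraded defending units

ratio = remaining_power(friendly) / remaining_power(enemy)
print(f"Force ratio at the point of attack: {ratio:.2f}:1")  # prints 2.58:1

# A planner might compare the ratio against a rule-of-thumb threshold such as
# 3:1 for an attack against a prepared defense. The number informs, but never
# replaces, the qualitative judgment of the assessment working group.
if ratio < 3.0:
    print("Below the illustrative 3:1 threshold; revisit the maneuver or targeting plan.")
```

Used this way, the calculation is the torque wrench: a narrow, well-calibrated input to the AWG’s broader qualitative discussion, not a substitute for it.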

Groupthink. Psychologist Irving Janis coined the term “groupthink” to describe a psychological phenomenon in which people are too deeply concerned with remaining part of a cohesive in-group.29 The in-group’s desire for unanimity overrides its motivation to “realistically appraise alternative courses of action.”30 This is different from “yes-men” who simply tell the commander what they want to hear. In groupthink, everyone wants so much to be part of the team that suggesting any alternative becomes unthinkable. No one wants to rock the boat, so no one mentions the giant iceberg ahead.

Confirmation bias. Confirmation bias is another cognitive trap that all members must actively guard against. It occurs when individuals seek out (whether consciously or unconsciously) only the data that build their one-sided case.31 To some degree, this is the flip side of groupthink: people come to the table with preconceived ideas and then look for data to back them up. One cannot see the icebergs ahead if one does not look for them.

The solution to both groupthink and confirmation bias is for leaders to actively seek out divergent opinions. Simple techniques such as appointing a staff member as the “red team” leader during meetings can help combat these cognitive biases. Leader actions as simple as asking “What are we missing?” before the end of each meeting can have a powerful impact on developing an organizational culture that counters these biases.

Conclusion

Organizational assessments are difficult because LSCO is chaotic. Free-thinking enemies present dilemmas to organizations at every level, and knowing when and how to synchronize all elements of combat power in time and space at a decisive point is a daunting task at any echelon. Assessments are hard because combat is hard, but assessments are important because winning is important. Military organizations must put in place systems that integrate organizational assessments across all WfFs and into the proper place and time in the unit’s battle rhythm. Organizations must understand both how combat assessments inform the operating environment and how operational assessments describe and shape the future end state. Doing one without the other is, at best, driving recklessly; at worst, it is driving the car forward while looking only in the rearview mirror.


Notes

  1. Field Manual (FM) 6-0, Commander and Staff Organization and Operations (Washington, DC: U.S. Government Publishing Office [GPO], 16 May 2022), 1-2, accessed 29 September 2022, https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN35404-FM_6-0-000-WEB-1.pdf.
  2. FM 5-0, Planning and Orders Production (Washington, DC: U.S. GPO, 16 May 2022), para. 8-2–8-4, fig. 8-1, accessed 29 September 2022, https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN35403-FM_5-0-000-WEB-1.pdf.
  3. Emily Mushen and Jonathan Schroden, Are We Winning? A Brief History of Military Operations Assessments (Arlington, VA: Center for Naval Analyses, 3 September 2014), 2, accessed 29 September 2022, https://www.cna.org/reports/2014/are-we-winning.
  4. Ibid., 5–6.
  5. Lynette Arnhart and Marvin King, “Are We There Yet? Implementing Best Practices in Assessments,” Military Review 98, no. 3 (May-June 2018): 20–29, accessed 29 September 2022, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/King-Arnhart-Are-We-There.pdf.
  6. Clayton Thomas, Taliban Government in Afghanistan: Background and Issues for Congress, Congressional Research Service (CRS) Report No. R46955 (Washington, DC: CRS, 2 November 2021), accessed 29 September 2022, https://crsreports.congress.gov/product/pdf/R/R46955.
  7. Joe Roach and Clay White, “Assessing the Decisive Action Fight,” Army Press Online Journal 16, no. 26 (2016): 3–4, accessed 14 October 2022, https://www.armyupress.army.mil/Portals/7/Army-Press-Online-Journal/documents/16-26-Roach-and-White-24Jun16.pdf.
  8. FM 5-0, Planning and Orders Production, 8-1.
  9. Joint Doctrine Note (JDN) 1-15, Operation Assessment (Washington, DC: U.S. GPO, January 2015), I-1.
  10. Army Doctrine Publication (ADP) 5-0, The Operations Process (Washington, DC: U.S. GPO, 31 July 2019), 5-23, accessed 29 September 2022, https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18126-ADP_5-0-000-WEB-3.pdf.
  11. Mushen and Schroden, Are We Winning?, 2.
  12. ADP 5-0, The Operations Process, 5-7.
  13. JDN 1-15, Operation Assessment, I-3, fig. I-1.
  14. Jonathan Schroden, “Why Assessments Fail—It’s Not Just the Metrics,” Naval War College Review 64, no. 4 (2011): 8–9, accessed 14 October 2022, https://digital-commons.usnwc.edu/nwc-review/vol64/iss4/8.
  15. Army Techniques Publication (ATP) 3-60, Targeting (Washington, DC: U.S. GPO, May 2015), 2-14–2-16, accessed 29 September 2022, https://armypubs.army.mil/epubs/DR_pubs/DR_a/pdf/web/atp3_60.pdf.
  16. Ibid., 2-14.
  17. Ibid.
  18. ATP 2-01.3, Intelligence Preparation of the Battlefield (Washington, DC: U.S. GPO, November 2014), 6-14, accessed 29 September 2022, https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN31379-ATP_2-01.3-001-WEB-4.pdf.
  19. Ibid.
  20. FM 5-0, Planning and Orders Production, 8-1.
  21. ADP 3-0, Operations (Washington, DC: U.S. GPO, 31 July 2019), 2-1, accessed 29 September 2022, https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18010-ADP_3-0-000-WEB-2.pdf.
  22. ADP 5-0, The Operations Process, 5-34.
  23. ATP 5-0.3, Operation Assessment: Multi-Service Tactics, Techniques, and Procedures for Operation Assessment (Washington, DC: U.S. GPO, February 2020), 37, accessed 29 September 2022, https://armypubs.army.mil/epubs/DR_pubs/DR_a/pdf/web/ARN20851_ATP_5-0x3_FINAL_WEB.pdf.
  24. FM 6-0, Commander and Staff Organization and Operations, 4-1.
  25. ATP 5-0.3, Operation Assessment, 37–38.
  26. Ibid., 39, table 9.
  27. Ibid., 40.
  28. Ibid., 50; JDN 1-15, Operation Assessment, II-10–II-11.
  29. Irving Janis, “Groupthink,” in A First Look at Communication Theory, ed. Emory A. Griffin (New York: McGraw-Hill, 1991), 237.
  30. Irving Janis, Groupthink (Boston: Houghton Mifflin, 1983), 9.
  31. Raymond S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, no. 2 (1998): 177, https://doi.org/10.1037/1089-2680.2.2.175.

 

Lt. Col. Mitchell Payne is an observer coach/trainer with the U.S. Army Mission Command Training Program at Fort Leavenworth, Kansas. His research areas include human resource development, organizational behavior, ecclesial leadership, and organizational culture. He has previously served as a battalion/task force commander, 3rd Infantry Division; brigade and battalion executive officer, 188th Infantry Brigade, First Army; and battalion and squadron executive officer, 1st Stryker Brigade Combat Team, 25th Infantry Division.

 


March-April 2023