June 2016 Online Exclusive Article

Assessing the Decisive Action Fight

Col. Joe Roach & Maj. Clay White

Article published on: June 24, 2016


Corps Commander receiving the assessment
Assessment is applicable across the range of military operations. It offers perspective and insight, and provides the opportunity for self-correction, adaptation, and thoughtful results-oriented learning. – Commander’s Handbook for Assessment Planning and Execution

Combat experience over the past twelve years has demonstrated the benefits of quantitative assessments in operational headquarters. Prior to the Global War on Terrorism, doctrine governing staff organization and operations spoke generally of assessments as occurring within each staff section and presented them as synonymous with running estimates. By 2012, the usefulness of quantitative assessments was codified in Field Manual (FM) 6-0, Commander and Staff Organization and Operations, and the role of the operations research analyst in assessments was promoted.

Beginning in 2002,1 Operations Research/Systems Analysts (ORSAs) deployed with operational headquarters to facilitate quantitative assessments of effects. The ORSAs served as the subject matter experts in the cataloging (databases), analysis, and presentation of numerical data. This capability provided commanders with a means to underpin their awareness of, and decisions concerning, the battlespace with tangible, quantifiable evidence. Without a formally documented process or doctrine for operational assessment, the ORSAs in deployed headquarters developed techniques and ad hoc processes which they subsequently propagated across the community of practice. These practices, however, were applied only to stability operations and counterinsurgency (COIN). As the conflicts in Afghanistan and Iraq were being scaled back, the Army recognized the need to reshape its force to “a force that is more broadly capable of missions across the range of military operations.”2

When writers began addressing assessments in later doctrine updates, they relied (correctly) on validated processes (figure 1). However, few, if any, of those processes addressed operational assessments in an offensive decisive action context. The J7’s Commander’s Handbook for Assessment Planning and Execution provides a good overview of assessments and processes; however, as evidenced by the metrics used as examples, it is primarily oriented toward assessments for a stability operations/COIN campaign.3 FM 6-0 provides an excellent guide for assessments, including some limited examples of metrics to be used in a decisive action fight as well as examples drawn from stability operations.4 While this assessment guidance is well crafted, it is imperative that assessment tactics, techniques, and procedures (TTPs) and training develop beyond the COIN environment. Additionally, proven techniques and procedures for executing operational assessments of decisive action operations are not readily available to headquarters and staffs within existing sources. Further development and documentation of assessment doctrine are needed.

Figure 1: A process used in “typical” assessments

III Corps in the Decisive Action Fight

Beginning in 2014, III Corps executed a series of command post and warfighter exercises (WFX) acting as the Combined/Joint Force Land Component Command with multiple subordinate divisions conducting offensive operations in the Army’s Decisive Action Training Environment scenario. This scenario placed the Corps in a multi-division attack against a near-peer competitor. During the development and execution of these exercises, the III Corps staff determined that a quantitative, holistic operational assessment can provide the commander a truthful evaluation of the unit’s effectiveness, even within the time-constrained targeting cycle of a corps’ offensive fight.

The context for these exercises was the Phase III (Dominate) fight, characterized by a high operations tempo (OPTEMPO) and rapidly changing conditions. The Corps’ battle rhythm, in turn, reflected that high OPTEMPO, with nineteen daily battle rhythm events, including ten staff working groups across warfighting functions (WfF) that ultimately fed the Joint Targeting Board chaired by the Commanding General (Figure 2).

Figure 2: The Corps’ three-day battle rhythm cycle

Such a congested and brisk battle rhythm precluded adding a standalone Assessment Working Group, and the rapidly changing operational environment made timely execution of a separate Assessment Board for the commander impractical. Finally, the length of each WFX was only nine days, which provided little opportunity to include assessment processes that followed a weekly, monthly, or even quarterly battle rhythm, as had been the case with the established practice of operational assessment in the COIN environment.

The Assessment Cell nested its efforts in the daily rhythm of the targeting cycle, which supported an assessment framework that was applicable to the Corps’ objectives and focus in the Phase III fight. The staff review of the assessment was integrated into the Targeting Working Group, and the Corps Joint Targeting Board became the venue for presenting the assessment to the commander (Figure 3).

Figure 3: The streamlined process

The Corps Joint Targeting Board provided the Commander recommended targeting guidance, priorities, and refinements to the scheme of fires across a 72- to 96-hour future operations planning horizon, nested with the Air Tasking Order (ATO) cycle. The format of the board progressed from a review of current operations, through refinements to the upcoming cycle, to approval and forecasting of fires tasks at the far end of the horizon. Additionally, it served as a staff synchronization mechanism that facilitated the Commander’s visualization of the battlefield. This battle rhythm event provided the best forum to maintain and present a topical assessment of the Corps’ progress toward the desired conditions that were most relevant in the targeting horizon. By presenting the assessment at the targeting board, the Assessment Cell enabled the commander to decide on the allocation of efforts and resources in light of their current demonstrated efficacy.

The assessment framework necessarily followed the operational phasing that emerged from Corps’ planning. Since each phase of the operation had its own defined end state conditions, creating a single enduring assessment framework was impractical. Given the emphasis on planning for Phase III (Dominate) of the operation, the initial assessment framework was oriented on measuring efforts only toward that phase’s end state, based on tasks and conditions that resulted from course of action development and wargaming. The exercise scenario began with the Corps already postured to transition from Phase II to Phase III, with planning for Phase IV (Stabilize) to be completed during exercise execution.

The desired end state conditions for Phase III and the Corps’ High Payoff Target List (HPTL) informed the assessment framework during planning. The complete end state included conditions for the friendly forces, allies, civilians, and terrain, but the nature of Phase III made the enemy conditions the priority. Phase III end state conditions broadly defined desired effects on the enemy’s force in the aggregate, while the HPTL provided a more detailed analysis of specific enemy systems and units, as well as the doctrinal tasks to be applied against them. From those products the assessment planners developed an initial set of desired effects, each written as a targeted system and a desired state, e.g., “Enemy Integrated Air Defense System (IADS) Defeated.” The assessment planners derived the desired state for each effect from the friendly tasks related to that target or system. This ensured that the assessment was fully nested with the Corps’ efforts to achieve the desired conditions, and it established a logical method to set appropriate measures of effectiveness (MOE) and measures of performance (MOP) for each assessed effort. Using the initial HPTL also predisposed the assessment to be incorporated smoothly into the targeting process during execution.

While an outcome (or state) applied to a target (or system) can be subjectively defined with its nested MOE, in the offensive fight the staff does not have the luxury of time with a cross-functional working group to shape the definition. This shortfall is easily mitigated, however, by using tactical tasks and essential fire support tasks designated and approved by the commander.5

The Assessment Cell translated doctrinal tasks to develop desired conditions to facilitate MOE development. For example, during Phase III operations as part of WFX 15-3, the Corps sought to defeat the 18th Division Tactical Group (18 DTG). Army Doctrine Reference Publication 1-02 defines “defeat” as a tactical task where “…The defeated force’s commander is unwilling or unable to pursue that individual’s adopted course of action… and can no longer interfere to a significant degree with the actions of friendly forces.”6 Three aspects are present in this definition: enemy willingness, enemy ability and interference with actions of friendly forces. Deconstructing that definition and specifying the direction of change led to three distinct measures of effectiveness for achieving defeat of any enemy unit/system. Applying the direction to each measure (in this case decrease for all three) makes the MOE a complete metric, and suggests how to select quantifiable indicators for each metric. The resultant MOE were: “Decrease in 18 DTG willingness to fight,” “Decrease in 18 DTG ability to continue blocking vic OBJ GIANTS,” and “Decrease in 18 DTG impact on friendly maneuver.”

The assessment planners developed each desired condition or operational objective to the MOE level during the course of action development step of planning, with proposed indicators mapped to the respective MOE. The assessment planners refined the specific indicators by polling the staff during the combined arms rehearsal. Over the course of several exercises, the Assessment Cell validated several typical indicators and data sources that support frequently used metrics. For example, a commonly used indicator is an estimate of an enemy unit’s remaining combat power (to measure progress toward defeating that unit), provided by the G2. The assessment framework published in the OPORD was defined only down to the MOE level of specificity to give greater flexibility in modifying indicators to reflect an evolving operational environment.

Indicators guided assessment of each MOE during execution. For the defeat task, these were “Is the 18 DTG retrograding East or South (Y/N)?,” “What is the 18 DTG’s remaining combat power (%)?,” and “What % of 1 ID lead brigades are East of PL Gary?” Regarding the first indicator, a defensive posture oriented westward would be evidence of the enemy’s willingness to fight; the G2 provided a determination on the status of that indicator from collection on the 18 DTG. The second indicator spoke directly to the 18 DTG’s remaining ability to fight. Finally, the third indicator spoke to the effect that the 18 DTG was having on the friendly scheme of maneuver.
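One way to picture how the pieces fit together is as a nested structure: a desired effect owns weighted MOEs, and each MOE owns weighted indicators with scoring thresholds. The sketch below encodes the 18 DTG defeat example in Python; the weights and threshold values shown are illustrative assumptions for this article, not the figures used during the exercises.

# Illustrative sketch (Python) of the assessment framework for one desired effect.
# All weights and thresholds below are notional; in practice they were set by the
# warfighting-function subject matter experts and validated through the Targeting
# Working Group.
defeat_18_dtg = {
    "effect": "18 DTG defeated",
    "moes": [
        {"moe": "Decrease in 18 DTG willingness to fight",
         "weight": 1.0,
         "indicators": [
             # Binary indicator: is the 18 DTG retrograding East or South?
             {"name": "18 DTG retrograding E/S (Y/N)",
              "weight": 1.0, "worst": 0, "best": 1}]},
        {"moe": "Decrease in 18 DTG ability to continue blocking vic OBJ GIANTS",
         "weight": 1.5,
         "indicators": [
             # Remaining combat power (%): 100% is the worst case for friendly
             # forces, 0% the best.
             {"name": "18 DTG remaining combat power (%)",
              "weight": 1.0, "worst": 100, "best": 0}]},
        {"moe": "Decrease in 18 DTG impact on friendly maneuver",
         "weight": 1.0,
         "indicators": [
             # Friendly progress indicator: percentage of 1 ID lead brigades
             # East of PL Gary.
             {"name": "% of 1 ID lead brigades East of PL Gary",
              "weight": 1.0, "worst": 0, "best": 100}]},
    ],
}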

Figure 4: A screenshot from the tool used to calculate the assessment

The use of a methodology and structure that was easily understood and unbiased was key to maximizing participation by the other WfFs. The quantitative assessment was calculated with a simple linear value function applied to normalized scores for each data indicator. This involved no higher math than simple addition and multiplication, and it allowed WfF subject matter experts to immediately observe the impact of the weights and thresholds they set.

The Assessment Cell normalized the indicator values to a continuous scale ranging from 0 to 3 in order to make comparisons between the indicators. On the scale, a “0” score was the worst and “3” was the best. The score was interpolated uniformly between thresholds for easy interpretation: “Red” for scores between 0 and 1, “Amber” for 1 to 2, and “Green” for 2 to 3 (Figure 4). This allowed a de facto “apples to oranges” comparison in weighting. The resultant scores for the indicators within a given MOE were multiplied by their respective weights and divided by the sum of the weights to produce an MOE score. The MOE scores within a given effect were likewise multiplied by their respective weights and divided by the sum of the weights to produce an overall score for each desired condition, which was transcribed from the calculation tool (a spreadsheet) to briefing slides for presentation.
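A minimal sketch of that roll-up, using the same notional weights and thresholds as the framework sketch above, follows. Each raw indicator value is interpolated linearly between its “worst” and “best” thresholds onto the 0-to-3 scale, indicator scores are combined into an MOE score by a weighted average, and MOE scores are combined the same way into the overall score for the desired condition. The sample data are illustrative, not exercise results.

# Minimal sketch (Python) of the normalization and weighted roll-up described
# above. Sample values, thresholds, and weights are illustrative only.

def normalize(value, worst, best):
    # Linearly interpolate a raw indicator value onto the 0-3 scale and clamp.
    score = 3.0 * (value - worst) / (best - worst)
    return max(0.0, min(3.0, score))

def weighted_average(scores, weights):
    # Linear value function: sum of products divided by the sum of the weights.
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def color_band(score):
    # Map a 0-3 score to the color bands used on the briefing product.
    return "Red" if score < 1.0 else "Amber" if score < 2.0 else "Green"

# The three defeat MOEs, each driven here by a single indicator (notional data).
moe_scores = [
    normalize(1, worst=0, best=1),      # retrograding East/South? (yes = 1)
    normalize(55, worst=100, best=0),   # remaining combat power, percent
    normalize(40, worst=0, best=100),   # percent of 1 ID lead brigades East of PL Gary
]
moe_weights = [1.0, 1.5, 1.0]

# Roll the MOE scores up to the overall score for "18 DTG defeated."
condition_score = weighted_average(moe_scores, moe_weights)
print(round(condition_score, 2), color_band(condition_score))   # 1.78 Amber

Because every step is simple addition and multiplication, a staff officer can trace any change in the briefed color directly back to a raw indicator value, a threshold, or a weight.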

In the briefing product, the Assessment Cell represented the overall score for each desired condition using a color scale within the shape of an arrow to indicate both the current state and the direction of change from the previous assessment. Additionally, the briefing product included a summary of the respective indicators and highlighted the raw metrics that primarily drove the trend. Finally, the major efforts applied to achieving that condition in the previous targeting cycle were listed so that correlation of effort to progress could be seen where applicable. This provided the commander with a quick snapshot of the Corps’ efforts alongside the actual difference that those efforts were making on the campaign’s outcome (Figure 5). The presence of the significant metrics provided transparency and, therefore, credibility to the movement on the color scale.

Figure 5: A sample of an assessment

The Corps battle rhythm presented practical challenges to collecting data and completing the assessment prior to the Joint Targeting Board, which took place only a few hours after the end of the cycle being assessed. The compressed time available for synthesizing the assessment proved to be the major difference between this process and the established practice of operational assessments in stability operations. At the Targeting Working Group held the night before each board, the Assessment Cell presented, for approval, the metrics, indicators, and thresholds (without current data).

That forum for validating and sharing the assessment framework enabled decentralized submission of data to the Assessment Cell without a separate assessment working group. As soon as the targeting cycle ended the following morning, the Assessment Cell collected and calculated the quantitative assessments, documented the applied efforts, and drafted conclusions and recommendations. This draft assessment was rapidly staffed through the G5, G2, and Fire Support Coordinator before being incorporated into the Targeting Board.

What We’ve Learned

Five key practices enabled the successful execution of operational assessments in the increased tempo of the Corps’ offensive fight. These recommendations differ from the stability operations assessment techniques and procedures developed over the last decade. However, they apply to all types of assessment, and should be incorporated into a doctrinally-based check on any assessment plan.

1. The assessment plan should be kept as simple as possible. The value of assessments hinges on credibility; this is easiest to establish when all the participants are able to understand the process. “Keeping it simple” compels the Assessment Cell to balance rigor with basic processes in calculating the quantitative assessment.

To this end, using linear value functions (the sum of products divided by the sum of weights) and linear weighting to translate indicator values into MOE scores effectively balances transparency and usefulness. The same linear value functions are then used to translate MOE scores into effect scores. Additional techniques for maintaining simplicity include: seeking objective, quantitative indicators where possible; avoiding redundant or auto-correlated indicators for the same MOE; and using subjective binary indicators sparingly and, when they are used, attempting to group several binary values together to measure an MOE.

2. The assessment team should work closely with the staff to set appropriate thresholds for indicators within their respective WfF—they are the experts. Aside from deferring to the expertise of WfF representatives, the Assessment Cell is better able to provide an objective perspective when they are not dictating the scale used to measure. The G2 and Joint Fires Cell may provide a majority of the indicators used in assessments of decisive action operations, but all staff sections should have input.

3. The assessment framework must balance between flexibility to handle the evolution of desired conditions over time and rigidity to preclude “moving the goalposts.” Over the course of a dynamic operation (with multiple phases) the desired effect to be applied to a given target may change. The staff must maintain a record of what metrics have changed and why, and any proposed changes should be validated through the Targeting Working Group, to ensure that efforts are focused on the same goal.

4. The quantitative assessment should be viewed as only one tool to assist the commander’s understanding of the operation. It does not provide a complete picture. There will be factors that the staff cannot periodically or reliably collect data on, that have a significant impact that one cannot forecast, and that cannot be rigorously quantified. The staff and commander should not ignore those variables.

Subjective judgments may provide a more complete picture of the situation. The quantitative assessment should act as a baseline, or foundation, upon which an overarching assessment (one that incorporates qualitative and/or anecdotal factors) is built. When briefing the assessment, the staff should also note what the assessment does not include. In doing so, the staff neither relies solely on what the numbers say nor ignores them.

5. The assessment plan should avoid burdening subordinate units with additional information collection requirements; if possible, use data already resident in Corps staff running estimates and reports. This practice is noted in Chapter 15 of FM 6-0, as a guideline for selecting and writing MOE, but it becomes essential during decisive action operations. While there is always a desire for more information, adherence to this maxim provides two benefits. First, it negates the need to confirm availability of the data. If the data is already being collected, the Assessment Cell can comfortably assume that it will be readily available, and the schedule for collection and processing is already known. Second, additional man-hours or other resources are not unnecessarily diverted from being applied elsewhere.

Further Study and Practice

The Assessment Cell provided a daily product that successfully demonstrated progress toward end state conditions. That said, there is always room for improvement in correlating the Corps’ effectiveness to the efforts actually applied. In order to provide that analysis of “juice” to “squeeze,” the complete methodology must capture and quantify efforts across all warfighting functions using measures of performance, then normalize those measures to a consistent scale (e.g., how does one compare the level of effort applied to nonlethal information operations tasks with that applied to lethal fires?). This will provide greater rigor in any correlations made between effort and progress and give the commander greater insight into how to direct the operation toward the desired end state.
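As a purely notional illustration of that extension (the effort categories, raw measures, and bounds below are assumptions, not exercise data), measures of performance from different warfighting functions could be normalized onto the same 0-to-3 scale already used for the effectiveness scores, making dissimilar levels of effort at least roughly comparable.

# Hypothetical sketch (Python): normalizing dissimilar measures of performance
# onto the 0-3 scale used for MOEs so that effort can be compared across
# warfighting functions. All categories, sample values, and bounds are notional.

def normalize(value, worst, best):
    # Linearly interpolate a raw MOP value onto the 0-3 scale and clamp.
    score = 3.0 * (value - worst) / (best - worst)
    return max(0.0, min(3.0, score))

effort_by_wff = {
    "Lethal fires":       normalize(140, worst=0, best=200),  # fire missions executed
    "Information ops":    normalize(3, worst=0, best=5),      # message series delivered
    "Electronic warfare": normalize(6, worst=0, best=10),     # jamming windows flown
}

for wff, score in effort_by_wff.items():
    print(f"{wff}: {score:.2f} of 3")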

Quantitative operational assessment proved to be relevant to the Corps in decisive action. Techniques developed since 2002 for use in stability operations required modifications to account for a more dynamic operational environment. The operational assessment methodology for decisive action requires additional refinement to be more responsive to a commander’s decision making requirements during continuous, high tempo operations.

Notes

  1. Center for Army Analysis, Deployed Analyst History Report – Volume I, CAA-2009185 (Fort Belvoir, VA, March 2012), 22.
  2. U.S. Department of the Army, Army Strategic Planning Guidance, 2014.
  3. U.S. Joint Staff J-7, Commander’s Handbook for Assessment Planning and Execution, Version 1.0 (Suffolk, Virginia: U.S. Joint Staff, 9 September 2011).
  4. Field Manual 6-0, Commander and Staff Organization and Operations (Washington, D.C.: U.S. Government Printing Office [GPO], 5 May 2014), chapter 15.
  5. Doctrinal definitions for these tasks are available in Army Doctrine Reference Publication (ADRP) 1-02, Terms and Military Symbols (Washington, D.C.: U.S. GPO, 2 February 2015), 1-24.
  6. Ibid.
 

Col. Joe Roach is currently assigned as the Chief of Assessments, III Corps, at Fort Hood, Texas. He holds an M.S. in operations research from Kansas State University. Col. Roach previously served as an assessment officer in Multi-National Corps-Iraq, ISAF Joint Command, and Multi-National Division-Baghdad. He has also served as an operations research analyst at I Corps and the TRADOC Analysis Center.
Maj. Clay White is currently assigned as an assessment officer at III Corps, deployed in support of Operation Inherent Resolve. He holds B.A. and M.S. degrees from the University of Virginia and an M.S. from the Florida Institute of Technology. Maj. White previously served as an infantry officer in the 101st Airborne Division and the 82nd Airborne Division. He also served as an operations research analyst for the TRADOC Analysis Center at Fort Lee.