Journal of Military Learning
 
 

Not That Straightforward

Examining and Enhancing Soldier Development

Randy J. Brou1, Jayne L. Allen1, Krista L. Ratwani2, Frederick J. Diedrich3, Tatiana H. Toumbeva4, and Scott M. Flanagan5

1 U.S. Army Research Institute for the Behavioral and Social Sciences, Fort Benning, Georgia

2 Aptima Inc., Arlington, Virginia

3 Independent Consultant, Groton, Massachusetts

4 Aptima Inc., Orlando, Florida

5 Sophia Speira LLC, Carthage, North Carolina

 


Abstract

Within the Army, traditional assessment methods often focus on whether a soldier “has” or “does not have” an adequate level of an attribute or competency. An assumption underlying such straightforward methods is that soldier development is linear and consistent from one context to the next. When soldier development is assessed over time, however, the resulting graph will appear messy; it is likely to feature peaks and valleys rather than proceed straight toward a desired benchmark. This is because context matters. In this article, we present a conceptual approach to understanding the interactions between the elements of attributes and competencies and the contexts in which those elements are manifested, an approach that may facilitate moving from have/have not assessment methods to contextually sensitive ones. Using an example, we illustrate the decomposition of an attribute and the surrounding context to create more granular assessments sensitive to such interactions. We then explore the contextual elements more or less likely to impact specific attribute elements by considering how they relate. The final section of this article contains a short discussion of two potential assessment methods that may allow the concepts presented here to be investigated and applied.

 

Military personnel must operate in ever-changing environments throughout their careers. The requirement to respond effectively to various situations necessitates that soldiers possess an array of attributes and competencies beyond the tactical and technical skills needed for any given context. These attributes and competencies are described in the Army’s leader requirements model (LRM) contained in Army Doctrine Publication 6-22, Army Leadership and the Profession (U.S. Department of the Army [DA], 2019a). Targeted assessments are critical to understanding whether and how soldiers are developing various aspects of their leadership capacity.

The Army has rigorous selection and assessment processes that incorporate both cognitive and noncognitive predictors of performance (e.g., Farina et al., 2019). In institutional and unit training environments, a soldier’s specific skill or knowledge is often assessed using cutoff scores or other benchmarks that determine whether a soldier “has” or “does not have” an adequate level of a specific skill, attribute, or competency for a given purpose (see Truxillo et al., 1996). This straightforward approach is often necessary to maintain standards in selection and placement; however, the approach needed to support individual growth is one that both determines a soldier’s current level of skill and informs strategies for further development. What does it mean for an individual to “have” a certain attribute? Based on that answer, what are the implications for development?

Imagine that a soldier is stationed at Fort Drum, New York. On a brisk, 20-degree early March morning, that soldier completes a two-mile run in a qualifying time. Based on this and other scores, the soldier’s leader concludes that the soldier possesses high fitness, an element of presence within the LRM. Now that this soldier is deemed to have fitness, can that soldier be expected to have it as long as he or she maintains his or her workout routine? Suppose that his or her next professional military education course is at Fort Benning, Georgia. On a humid, 85-degree morning in late May, the soldier runs two miles in a nonqualifying time, resulting in a no-go mark on fitness. Does the soldier lack the attribute of fitness now? Did he or she ever have it? Is that even the right question to ask?

To further elaborate, based on what we know so far, the soldier may or may not have fitness; the probability is 0.5. If he or she moves on to Fort Lewis, Washington, that fall and completes the two-mile run in a qualifying time, the probability becomes 0.66. We may now be able to better assert that the soldier has fitness; it is more likely than not. Alone, this simple calculation is not enough. If our true purpose is to predict how this soldier is likely to perform when deployed, we need to know how (and if) Fort Drum and Fort Lewis are similar to each other and different from Fort Benning. We also need to know how the relevant features of such differences are manifested in the region of the upcoming deployment. Generally, we must refine our understanding of change in the attribute of interest and in the context in which that attribute is displayed. Only then may we develop informed predictions and targeted interventions.
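
To make this arithmetic concrete, the short sketch below (a minimal illustration, not part of the original analysis) contrasts the simple pass-rate estimate with one that conditions on context. The climate labels and the choice of a hot-weather deployment are assumptions added only for the example.

```python
# Minimal sketch of the arithmetic above: a naive estimate treats every
# recorded run as an equally informative pass/fail observation.
def pass_rate(outcomes):
    """Fraction of recorded events the soldier passed (1 = pass, 0 = fail)."""
    return sum(outcomes) / len(outcomes)

# (location, climate, outcome); the climate labels are illustrative shorthand.
history = [
    ("Fort Drum", "cold", 1),
    ("Fort Benning", "hot", 0),
    ("Fort Lewis", "temperate", 1),
]

print(pass_rate([o for _, _, o in history[:2]]))  # 0.5 after two events
print(pass_rate([o for _, _, o in history]))      # 2/3 (the 0.66 above) after three

# A context-sensitive estimate conditions on the anticipated deployment;
# here only events in a comparable (hot) climate count as relevant evidence.
deployment_climate = "hot"  # hypothetical upcoming condition
comparable = [o for _, climate, o in history if climate == deployment_climate]
print(pass_rate(comparable) if comparable else None)  # 0.0 -- a very different picture
```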

In the case of testing for fitness, it is unsurprising that context matters. At a minimum, results are likely to be affected by the weather. Our argument is that regardless of the targeted LRM element, such dependency on context is the rule rather than the exception. Graphs of soldier development are likely to appear unique and jagged (e.g., Rose, 2016), featuring peaks and valleys rather than progressing straight toward a desired benchmark. Soldier assessment and development must be sensitive to these complexities. The Army must bolster traditional assessment approaches to better support individual development throughout a career. From an instructional perspective, the focus must shift from determining whether a soldier has or does not have an adequate level of an attribute or competency to maximizing the probability of a soldier behaving in a desirable way across a range of contexts. Given that the operational environments in which soldiers perform are often dynamic, an approach to assessment that accounts for the details of context will be more useful for predicting success and identifying areas of intervention.

The purpose of this article is to present a conceptual approach to moving toward contextually sensitive assessment methods. These methods must account for the interactions between the measured attribute and the elements of the surrounding context. Within this article, we provide examples of decomposing the attribute or assessed competency and the surrounding context to create more granular assessments that are sensitive to such interactions. We use an exemplar to identify the contextual elements more or less likely to impact specific attribute elements by considering how the two relate. That level of information enables precise, systematic identification of the areas where soldiers may excel or where additional learning may be necessary. The final section of this article contains a short discussion of two potential assessment methods that may allow the concepts presented here to be investigated and applied. This work complements Army talent management initiatives such as the Army’s “Project Athena” that seek to focus soldier self-development activities based on completed self-assessments (Center for the Army Profession and Leadership, n.d.). Our work adds to such efforts by examining how targeted assessments can become more precise by accounting for the surrounding context.

Theoretical Underpinnings

Many developmental theories are stage-based, whether they cover a topic as broad as the human personality or as narrow as leadership skill. Such theories characterize development as a progression through a series of underlying mental structures or schemas that typify each stage. Initially simple mental representations become more complex understandings, and these changes potentially extend throughout the lifespan (Kegan, 1982; Kohlberg, 1969). Like Piaget’s theory (1952, 1983) in which they are rooted, stage-based theories describe change over time as a progression in the mental structures that a person has, which in turn define his or her developmental stage. The problem with these theories, however, is that development is messier than a well-ordered series of stages implies. Researchers (e.g., Mischel & Shoda, 1995; Rose, 2016; Thelen & Smith, 1994) have argued that a wide range of behaviors depend on if-then signatures (if in context A, then behavior B). These claims imply that developmental milestones are not universal. Instead, the milestones are dependent on the historical and cultural context (Rachwani et al., 2020). The critical insight from these theories is that the behaviors we see throughout human development are nuanced and highly dependent on context.

Like lifespan development, leader development is a complex construct that unfolds differentially over time as leaders face changing contexts. Stratified systems theory (Jacobs & Jaques, 1987) explicitly dictates that the context in which leaders operate changes as they move into different positions. For example, the decision space in which a division commander must operate is broader and more complex than the decision space in which a company commander must operate. Each new leadership level requires more complex skills. While sound judgment is important for each leader, exercising sound judgment is different given changes in scope, span of control, and scale of the situation. Stratified systems theory lays the foundation for examining the context in which leaders develop and perform.

Applying a stage-based approach in a military context, Bartone et al. (2007) conducted a longitudinal study of cadets at the U.S. Military Academy to examine their psychosocial development and performance as leaders. Significant positive trends in development were found for 47% of the cadets; however, these changes were not shown by the remaining 53% of participants. Data showed that leadership development did not consist exclusively of growth, an insight also noted by Baltes (1987) and echoed by Day et al. (2021). There may be negative changes in the assessed outcomes prior to seeing a positive change (Day & Sin, 2011). Differential growth rates and patterns will occur depending upon the specific skill or competency assessed (e.g., Kragt & Day, 2020).

This idea of individual differences and nonlinear patterns is also supported by recent data on the development of the Army Values (part of the LRM category of “character”). In a series of peer evaluations conducted with trainees during Basic Combat Training (BCT), the trainees rated one another on the degree to which they exhibited the Army Values. On average, these ratings improved over time, suggesting growth in the group. However, the results demonstrated that an individual soldier’s progress varied extensively (see Figure; Toumbeva et al., 2019). Individual trends showed that some soldiers received increasingly higher peer ratings from the beginning (red phase) to the middle (white phase) to the end of the course (blue phase). Some soldiers, however, received higher ratings during white phase compared to blue phase. Others received the same ratings in red and white phases followed by better ratings in blue phase. Interpreting such data becomes challenging. At what point do we say that a soldier has the Army Values?

Figure. Individual variation in peer ratings of the Army Values across Basic Combat Training phases (red, white, and blue)

From Toumbeva, T. H., Diedrich, F. J., Flanagan, S. M., Naber, A., Reynolds, K., Shenberger-Trujillo, J., Cummings, C., Ratwani, K. L., Ubillus, G., Nocker, C., Gerard, C. M., Uhl, E. R., & Tucker, J. S. (2019). Assessing character in U.S. Army initial entry training (ARI Technical Report 1373). U.S. Army Research Institute for the Behavioral and Social Sciences.

 

Collectively, these theories lay the foundation for the idea that development is complex, nonlinear, characterized by individual differences, and impacted by context. To help move toward an understanding of the probability that an individual will perform successfully across contexts, we must decompose attributes and situations to an appropriately granular level.

Attribute and Situation Decomposition

The attributes and competencies described in the LRM are complex and multifaceted. Similarly, the contexts in which soldiers operate vary by mission, team, location, and threat. We argue that to enable more precise comparisons across contexts and over time, the attributes under assessment and the situations where those attributes are exercised must be understood at a granular level. More granular attribute facets and contextual elements can then be mapped to one another to identify the aspects of an attribute likely to be stressed by a given situational element. We illustrate this concept using one exemplar attribute (builds trust) and sample contexts in which that attribute must be displayed. This example sets the stage for future research to investigate these relationships empirically.

Attribute Decomposition

An important part of understanding how an individual develops is identifying how the nuances of the competency or attribute of interest interact with the specific contextual demands the individual faces. Finding an appropriate level of granularity is a significant part of this challenge. The Army’s LRM contains six leadership attributes and competencies, which are further broken down into 24 subattributes and competencies (DA, 2019a). For example, the subcompetency “builds trust” (a component of the larger category “leads”) is defined in such a way that it can be distilled into multiple elements. If the focus is broadly on how builds trust develops over time, it may be difficult to predict the specific contexts in which a soldier will struggle or excel. This difficulty is because the contextual demands faced by the soldier are likely interacting with elements at a finer level of granularity. An appropriate level of granularity would be one that can be shown to directly relate to contextual demands, and ideally, one that enables actionable feedback. This does not imply a fully reductionist approach. Instead, from a functional perspective, the issue is the level of granularity that permits reliably using attribute-situation interrelations to understand and guide development.

Builds trust is useful to consider as an example because it is foundational to effective mission command (DA, 2019b), and as such, speaks directly to how the concepts introduced here might be applied to a critical issue for the Army. The first step in decomposition was reviewing relevant literature for extant conceptualizations of dimensions relating to building trust. Next, we referenced previously developed behavioral rubrics to determine facets and themes based on how builds trust has been operationalized for various Army contexts (e.g., Ingurgio et al., 2020; Toumbeva et al., 2018).

Based on our review, trust is generally defined as positive perceptions, beliefs, or expectations about the intentions of others and their competence, benevolence, integrity, and dependability, irrespective of the ability to monitor or control them (Dietz & Den Hartog, 2006; Mayer et al., 1995; Möllering, 2006). Trust is strengthened over time in several ways (Lewicki et al., 2006; Shapiro et al., 1992). Trust grows as individuals communicate through repeated, tactful, and multifaceted interactions that enable them to get to know one another so well that one person can predict the other’s behavior (e.g., what the other thinks, prefers, wants, does, needs). Engaging in two-way communication that enables the sharing of knowledge and information contributes to the development of mutual understanding and trust. Trust is also developed as individuals create a collective (shared) identity, purpose, and vision over time and demonstrate reciprocal interpersonal care and concern. This aspect of trust is reflected in individuals taking consistent, deliberate, and voluntary action to provide support to one another at the right place and time, without bias or display of favoritism, and ideally in a proactive manner. Support may be emotional, physical, or instrumental. Support entails looking out for others, protecting their interests, remaining accessible, modeling positive behaviors, and empowering others. Trust is also based on participative decision-making, as characterized by cooperative, inclusive behaviors such as consulting others, proactively seeking others’ perspectives, and giving feedback in a respectful manner while making decisions. Consistently demonstrating sound decision-making builds confidence in the competence of others and fosters trust. As individuals learn they can count on others to perform actions consistent with training and development in their role, they become more comfortable taking risks and accepting vulnerability. Good character, reflected in an individual’s moral attitudes and actions, is a critical driver of trust and greatly influences perceptions of trustworthiness. Examples of character include doing what is right despite the risk of adverse consequences, taking the hard right over the easy wrong, placing mission over personal needs, being honest about one’s own strengths and weaknesses, and behaving in a manner that demonstrates integrity, respect, empathy, and loyalty.

Collectively, these findings suggest that critical elements for building trust include communication, support, participative decision-making, sound decision-making, and character. To enable considerations of how a situation might differentially draw upon these elements, Table 1 shows example questions that could be asked to understand the element-specific stressors of a situation, which might be coded as yes/no or high/medium/low.

Table 1. Example questions for understanding the element-specific stressors a situation places on builds trust

Situation Decomposition

Similarly, we approached situation decomposition by reviewing relevant literature and holding discussions with subject-matter experts (SMEs) about how situational factors might impact behavior. For the SME contributions, two retired noncommissioned officers helped the research team translate existing frameworks into dimensions that might be usefully applied to military settings. Both SMEs had over 20 years of experience in the Army, during which they developed their skills across a wide variety of situations. Both had also served as instructors throughout their careers, which allowed them to provide insights into what types of experiences would be developmental in nature for soldiers.

First, we reviewed extant taxonomies of situational elements from the literature. DIAMONDS is a popular taxonomy that breaks down situations in terms of eight psychologically meaningful dimensions (duty, intellect, adversity, mating, positivity, negativity, deception, and sociality), thus providing a common language for research in this area (Rauthmann et al., 2014). The CAPTION model contains another set of dimensions (complexity, adversity, positive valence, typicality, importance, humor, and negative valence) that have been shown to predict psychological outcomes such as behavior and motivation (Parrigon et al., 2017). Each situational framework breaks down the environment into measurable and quantifiable elements that are perceived as psychologically salient, such as persons/interactions, events/activities/objects, and location (i.e., who, what, and where; Rauthmann et al., 2014). In working with the SMEs, we reviewed these taxonomies based on knowledge of what is meaningful in military settings. For instance, within DIAMONDS, the dimension of adversity is captured by the question: Is someone threatened? This dimension seems clearly relevant for military operations. In contrast, the dimension of mating is less relevant and is defined by the question: Is the situation sexually or romantically charged?

Based on this initial review, we then worked with our SMEs to identify similar questions that might be asked about contexts that a soldier may encounter. We explored the nature of these example contexts using questions such as those in the DIAMONDS framework, iteratively expanding and refining them. The purpose of this step was to build on the elements derived from the literature and ensure their utility in describing various military settings.

The resultant situational elements framework is contained in Table 2. The elements were categorized according to who was involved in the situation, what was to be done, where the situation was occurring, and how the task demands shaped the necessary efforts. As with the attribute decomposition, each element was expressed as a question that could be coded (e.g., yes/no, high/medium/low) when exploring example situations. The situational elements that will be relevant when assessing a given competency are likely to vary (e.g., the physical demands are more likely to matter when assessing fitness than builds trust), as we hypothesize that the interaction between situational elements and competency/attribute elements is of primary importance.
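
As a minimal sketch of how such codings might be recorded for later comparison, the example below represents a single hypothetical situation as answers to a few who/what/where/how questions. The question keys, allowed values, situation name, and answers are illustrative shorthand of our own, not the actual wording or content of Table 2.

```python
# Hypothetical shorthand for coding a single situation; the question keys,
# allowed values, and answers are illustrative, not the wording of Table 2.
SITUATION_QUESTIONS = {
    "who_same_echelon": ("yes", "no"),                # Who is involved?
    "what_solution_obvious": ("yes", "no"),           # What is to be done?
    "where_familiar_setting": ("yes", "no"),          # Where does it occur?
    "how_time_pressure": ("low", "medium", "high"),   # How do demands shape effort?
    "how_lasting_consequences": ("low", "medium", "high"),
}

def code_situation(name, answers):
    """Validate one situation's coded answers against the allowed values."""
    for question, value in answers.items():
        allowed = SITUATION_QUESTIONS[question]
        if value not in allowed:
            raise ValueError(f"{name}: {question} must be one of {allowed}")
    return {"situation": name, **answers}

example = code_situation("Example field exercise", {
    "who_same_echelon": "yes",
    "what_solution_obvious": "no",
    "where_familiar_setting": "yes",
    "how_time_pressure": "high",
    "how_lasting_consequences": "low",
})
print(example)
```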

Example Situation and Attribute Mapping

To illustrate application of the approach, SMEs used the questions shown in Table 2 to examine sample contexts. For example, one situation considered was the Teamwork Development Course (TDC) in BCT. The TDC includes a variety of obstacles that trainees can overcome only by solving problems collaboratively. The obstacles vary in difficulty, must be completed within a certain time, and include the resources necessary to succeed. The possible solutions are not obvious. While the trainees are generally motivated to succeed in crossing the obstacles, the primary purpose is not to solve the obstacles per se. The emphasis is on participative decision-making, communication, and provision of support. As stress escalates due to obstacle difficulty and time constraints, the event can also highlight elements of character. Trainees can cheat on some obstacles because drill sergeants move between stations while trainees work independently. Trainee leaders may emerge but are not assigned.

Table 2. Situational elements framework: example questions about who is involved, what is to be done, where the situation occurs, and how task demands shape the necessary efforts

As a second example, the SMEs explored an event modeled on personal experiences in which an inexperienced platoon leader (PL) is deliberately challenged to learn how to balance and manage the needs of the team with the needs of the larger organization. Building trust can be difficult for new leaders as they seek to address the needs of their subordinates while managing the expectations of superiors. In this example, the unit is engaged in a reconnaissance training activity. The company commander (CO CDR) requests that the PL have the unit ready to go by a specific time, but the team requires additional time for preparation. The PL must navigate the interpersonal dynamics of the situation to meet the timeline without compromising the team. While the mission comes first, the PL must also be aware of second-order consequences (e.g., feelings of the team that their leader did not back them up). The assumption for this event is that the CO CDR deliberately sets up this tension to help the PL learn in a training setting.

These two situations illustrate how builds trust is not monolithic; instead, specific elements are differentially stressed by the situational factors that influence task execution. Judged by the questions in Table 1 that reflect the elements of builds trust, the events seem similar; both require a high amount of communication. They differ, however, in other respects. The TDC example is highly reliant on participative decision-making, while the junior PL example emphasizes individual decision-making. Likewise, because the TDC uses the obstacles as a vehicle to promote and learn about teamwork, solving the obstacle (i.e., demonstrating sound decision-making) is less important than in the junior PL example. In that context, the CO CDR wants to know whether the PL can solve the problem of balancing needs.

Digging deeper into the situations using the questions in Table 2, we also see that the specific situational contexts are similar in some respects but differ in others. For instance, with respect to Who, the individuals in the TDC are at the same organizational level, whereas the individuals involved in the PL example are by design at different levels (subordinates vs. leaders). This difference might contribute to the differential stress on the participative decision-making element of builds trust. Likewise, the types of stressors represented by the How element differ. The social and affective demands on the junior PL threaten more lasting consequences than those of the TDC, which in turn could influence the differences in the sound decision-making element under stress.

Even though this example does not address all questions shown in Tables 1 and 2, the use of just a few illustrative questions begins to unpack how the conditions under which builds trust must be demonstrated are not the same. These simple questions provide a way to begin to systematically understand what changes in different contexts.
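
The sketch below makes this contrast explicit by coding the two events on the three builds trust elements discussed above and flagging where they diverge. The ratings reflect our reading of the two examples and are offered only to illustrate the mapping, not as validated values from the article's tables.

```python
# Illustrative codings of how strongly each event stresses three of the
# builds-trust elements contrasted above (our reading of the narrative,
# not validated ratings).
tdc_demands = {
    "communication": "high",
    "participative_decision_making": "high",
    "sound_decision_making": "medium",
}
pl_demands = {
    "communication": "high",
    "participative_decision_making": "low",
    "sound_decision_making": "high",
}

def contrast(a, b):
    """Return the attribute elements the two situations stress differently."""
    return {element: (a[element], b[element])
            for element in a if a[element] != b[element]}

print(contrast(tdc_demands, pl_demands))
# {'participative_decision_making': ('high', 'low'),
#  'sound_decision_making': ('medium', 'high')}
```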

Looking Forward: Future Assessment Methodologies

The conceptual approach introduced here offers a theoretical view of how to begin building an assessment method that fully embraces the complex and dynamic contexts in which warfighters operate. We argue that traditional assessment methods that use a binary snapshot at one point in time do not provide the necessary details to fully inform predictions of future success in a complex world. Instead, assessments must move to a contextually sensitive approach that allows stakeholders to gather performance data in a variety of circumstances. To maximize the utility of such assessments, performance must be understood in relation to a specific context (e.g., this soldier can perform well given time pressure under conditions X, Y, Z) and a specific element of an attribute (e.g., participative decision-making, rather than builds trust). By decomposing both the surrounding context and the attribute under assessment, well-informed decisions can be made about a soldier’s strengths and areas for improvement.

Here we showcased the use of a series of questions to decompose attributes and situations. These questions can help us make better comparisons between performance contexts, comparisons that hinge on the way an event stresses the elements of an attribute. Assessing both attributes and situations begins to provide the tools to move toward nuanced assessments that might increase confidence that a soldier would exhibit an attribute based on specific patterns of previous experience. For example, performance on building trust can be anticipated to the extent that the history of behavior in prior conditions matches future requirements. This is akin to predicting whether a soldier has fitness using the history of prior testing events.
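
One way such a prediction might be operationalized, purely as a sketch, is to weight each prior observation by how closely its coded context matches the anticipated one, rather than computing an overall pass rate. The contextual features, similarity measure, and outcomes below are placeholder assumptions added for illustration, not a validated model.

```python
# Minimal sketch: estimate the probability of a desired behavior in an
# upcoming context by weighting prior observations by contextual overlap.
def similarity(ctx_a, ctx_b):
    """Fraction of coded situational features the two contexts share."""
    keys = ctx_a.keys() & ctx_b.keys()
    return sum(ctx_a[k] == ctx_b[k] for k in keys) / len(keys)

def weighted_estimate(history, upcoming):
    """Similarity-weighted average of past pass/fail outcomes (1 = pass)."""
    weights = [similarity(ctx, upcoming) for ctx, _ in history]
    if sum(weights) == 0:
        return None  # no comparable prior observations
    return sum(w * outcome for w, (_, outcome) in zip(weights, history)) / sum(weights)

history = [  # (coded context, observed outcome) -- illustrative values only
    ({"temperature": "cold", "terrain": "flat"}, 1),
    ({"temperature": "hot", "terrain": "flat"}, 0),
    ({"temperature": "cool", "terrain": "hilly"}, 1),
]
upcoming = {"temperature": "hot", "terrain": "hilly"}
print(weighted_estimate(history, upcoming))  # weights 0.0, 0.5, 0.5 -> 0.5
```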

Currently, the situational framework illustrated here is merely a hypothesis, though we anticipate that the types of questions presented will matter for myriad attributes and competencies. The next step is to illustrate how to use this framework to build assessments and, in so doing, to verify how the answers to the kinds of questions posed in the tables might affect the probability of a soldier behaving in accordance with an attribute. By leveraging the process illustrated here to document the surrounding context, two existing assessment methodologies could be refined to move beyond a binary has/has not methodology. Situational judgment test (SJT) items may be used for systematic manipulation and assessment of elements, while scenario injects may be used during live training events. Both SJTs and scenario injects can be intentionally designed to assess attributes using specific contextual features that are the target of the training event.

SJTs are short vignettes (scenarios) that describe the context of a problem and end with a “what would you do?” type question. There is typically no clear, obvious “right” answer. More sophisticated SJTs (see Brou et al., 2018) can also present problems that unfold differently based on the nature of initial responses. SJTs have been shown to predict behavior across a range of settings and situations (see Motowidlo et al., 2006). Similarly, vignettes in scenario-based training exercises (i.e., injects) could be employed. They are a widely used method for assessing and developing critical skills in realistic, operationally relevant situations (Cannon-Bowers & Salas, 1998; Martin et al., 2009; Oser, 1999; Zook et al., 2012). Live scenario-based exercises can therefore be used to systematically expose individuals to situational elements that draw out informative patterns of behavioral variability. Methods such as these, if used throughout a program of instruction, could allow instructors to deliberately build competence in ways that make it robust across contexts.

Discussion and Conclusions

Using controlled experiments, future research could obtain quantitative evidence of the impact of situational elements on specific performance criteria, thus shedding light on the deeper structure of individual leader performance and the utility of the concepts outlined here. Empirical identification of critical dimensions would enable development of contrasting scenarios. Using those scenarios in an instructional approach emphasizing student exploration across a problem space may increase the likelihood that an attribute would be displayed in novel circumstances. In domains such as physics where the problem space is well-defined, the use of contrasting cases has been shown to increase the likelihood of knowledge transfer (Schwartz et al., 2011).

We acknowledge that the approach presented here is not without its challenges, especially from a practical perspective. To implement such an assessment method would, at least initially, require additional work from the individuals responsible for assessment and development. Once fully developed, however, there are likely technological approaches that can be harnessed to help track, analyze, and predict the types of attribute and context interactions explored here (e.g., through machine-learning applications). Before this approach is ready for implementation, research must be conducted to more fully understand the impact of context on attribute and competency development. For instance, such research may reveal that certain situational factors are more consequential than others, that behaviors are stable within certain ranges of situational factors, or that attributes interact with each other in complex ways. It is expected that the number of significant interactions will be manageably finite, such that interventions can be implemented at scale. Army systems that meticulously track soldiers’ accomplishments, such as marksmanship status, may also preserve the context in which that status was obtained. We begin here by introducing an approach to capture complexity. Future research will need to explore solutions that leverage that knowledge in service of development.

We are certainly not the first to assert that context matters when attempting to predict behavior (Mischel & Shoda, 1995; Rose, 2016). In this article, we expanded on ideas related to individualized, nonlinear, and dynamic development based on context. We articulated methods for identifying and labeling contextual elements to enable systematic determination of how context matters in assessment. Contextual elements interact with granular elements of attributes, resulting in jagged developmental trajectories. Recognizing that jaggedness in and of itself is insufficient to inform assessment, we have started to describe attributes at an actionably granular level. The aim of future research could be to provide evidence that exposing soldiers to specific contexts, selected as a function of their jagged profiles of competencies, will promote development. If such evidence could be provided, then we would be well on the way to formulating precise methods and tools for promoting leader development.

The research described herein was sponsored by the U.S. Army Research Institute for the Behavioral and Social Sciences, Department of the Army (Contract No. W911NF-20-F0007). The views expressed in this paper are those of the authors and do not reflect the official policy or position of the Department of the Army, DOD, or the U.S. government.


References

Baltes, P. B. (1987). Theoretical propositions of life-span developmental psychology: On the dynamics between growth and decline. Developmental Psychology, 23(5), 611–626. https://doi.org/10.1037/0012-1649.23.5.611

Bartone, P. T., Snook, S. A., Forsythe, G. B., Lewis, P., & Bullis, R. C. (2007). Psychosocial development and leader performance of military officer cadets. The Leadership Quarterly, 18(5), 490–504. https://doi.org/10.1016/j.leaqua.2007.07.008

Brou, R., Stallings, G., Stearns, I., Normand, S., & Ledford, B. (2018). Building automated assessments of interpersonal leadership skills. In Proceedings of the 2018 Interservice/Industry Training, Simulation and Education Conference. http://www.iitsecdocs.com/volumes/2018

Cannon-Bowers, J. A., & Salas, E. (1998). Individual and team decision making under stress: Theoretical underpinnings. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 17–38). American Psychological Association.

Center for the Army Profession and Leadership. (n.d.). Project Athena leader self-development tool. https://capl.army.mil/Project-Athena/sd-tool/#/

Day, D. V., Riggio, R. E., Tan, S. J., & Conger, J. A. (2021). Advancing the science of 21st-century leadership development: Theory, research and practice. The Leadership Quarterly, 32(5), 1–9. http://dx.doi.org/10.1016/j.leaqua.2021.101557

Day, D. V., & Sin, H. P. (2011). Longitudinal tests of an integrative model of leader development: Charting and understanding developmental trajectories. The Leadership Quarterly, 22(3), 545–560. http://dx.doi.org/10.1016/j.leaqua.2011.04.011

Dietz, G., & Den Hartog, D. N. (2006). Measuring trust inside organizations. Personnel Review, 35(5), 557–588. http://dx.doi.org/10.1108/00483480610682299

Farina, E. K., Thompson, L. A., Knapik, J. J., Pasiakos, S. M., McClung, J. P., & Lieberman, H. R. (2019). Physical performance, demographic, psychological, and physiological predictors of success in the U.S. Army Special Forces Assessment and Selection course. Physiology & Behavior, 210, Article 112647. https://doi.org/10.1016/j.physbeh.2019.112647

Ingurgio, V. J., Ratwani, K. L., Nargi, B., Flanagan, S., Diedrich, F. J., & Toumbeva, T. H. (2020). Tools for assessing and tracking leadership attributes and competencies (ARI Research Report 2030, DTIC No. AD1112648). U.S. Army Research Institute for the Behavioral and Social Sciences. https://apps.dtic.mil/sti/pdfs/AD1112648.pdf

Jacobs, T. O., & Jaques, E. (1987). Leadership in complex systems. In J. Zeidner (Ed.), Human productivity enhancement: Organization, personnel, and decision making (pp. 7–65). Praeger.

Kegan, R. (1982). The evolving self: Problem and process in human development. Harvard University Press.

Kohlberg, L. (1969). Stage and sequence: The cognitive developmental approach to socialization. In D. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Rand McNally.

Kragt, D., & Day, D. V. (2020). Predicting leadership competency development and promotion among high-potential executives: The role of leader identity. Frontiers in Psychology, 11, Article 1816. https://doi.org/10.3389/fpsyg.2020.01816

Lewicki, R. J., Tomlinson, E. C., & Gillespie, N. (2006). Models of interpersonal trust development: Theoretical approaches, empirical evidence and future directions. Journal of Management, 32(6), 991–1022. https://doi.org/10.1177%2F0149206306294405

Martin, G., Schatz, S., Bowers, C., Hughes, C. E., Fowlkes, J., & Nicholson, D. (2009). Automatic scenario generation through procedural modeling for scenario-based training. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 53(26), 1949–1953. https://doi.org/10.1177%2F154193120905302615

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335

Mischel, W., & Shoda, Y. (1995). A cognitive-affective systems theory of personality: Reconceptualizing situations, dispositions, dynamics, and invariance in personality structures. Psychological Review, 102(2), 246–268. https://doi.org/10.1037/0033-295x.102.2.246

Möllering, G. (2006). Trust: Reason, routine, reflexivity. Elsevier.

Motowidlo, S. J., Hooper, A. C., & Jackson, H. L. (2006). Implicit policies about relations between personality traits and behavioral effectiveness in situational judgment items. Journal of Applied Psychology, 91(4), 749–761. http://dx.doi.org/10.1037/0021-9010.91.4.749

Oser, R. (1999). A structured approach for scenario-based training. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 43(21), 1138–1142. https://doi.org/10.1177%2F154193129904302105

Parrigon, S., Woo, S. E., Tay, L., & Wang, T. (2017). CAPTION-ing the situation: A lexically derived taxonomy of psychological situation characteristics. Journal of Personality and Social Psychology, 112(4), 642–681. https://doi.org/10.1037/pspp0000111

Piaget, J. (1952). The origins of intelligence in children. International Universities Press.

Piaget, J. (1983). Piaget’s theory. In P. H. Mussen & W. Kessen (Eds.), Handbook of child psychology: History, theory, and methods (Vol. 1, pp. 101–128). Wiley.

Rachwani, J., Hoch, J., & Adolph, K. E. (2020). Action in development: Plasticity, variability, and flexibility. In C. S. Tamis-LeMonda & J. J. Lockman (Eds.), The Cambridge handbook of infant development: Brain, behavior, and cultural context (pp. 469–494). Cambridge University Press. https://doi.org/10.1017/9781108351959

Rauthmann, J. F., Gallardo-Pujol, D., Guillaume, E. M., Todd, E., Nave, C. S., Sherman, R. A., Ziegler, M., Jones, A. B., & Funder, D. C. (2014). The situational eight DIAMONDS: A taxonomy of major dimensions of situation characteristics. Journal of Personality and Social Psychology, 107(4), 677–718. https://doi.org/10.1037/a0037250

Rose, T. (2016). The end of average: Unlocking our potential by embracing what makes us different. HarperCollins.

Schwartz, D. L., Chase, C. C., Oppezzo, M. A., & Chin, D. B. (2011). Practicing versus inventing with contrasting cases: The effects of telling first on learning and transfer. Journal of Educational Psychology, 103(4), 759–775. http://dx.doi.org/10.1037/a0025140

Shapiro, D., Sheppard, B. H., & Cheraskin, L. (1992). Business on a handshake. Negotiation Journal, 8(4), 365–377. https://doi.org/10.1111/j.1571-9979.1992.tb00679.x

Thelen, E., & Smith, L. B. (1994). A dynamic systems approach to the development of cognition and action. MIT Press.

Toumbeva, T. H., Diedrich, F. J., Flanagan, S. M., Naber, A., Reynolds, K., Shenberger-Trujillo, J., Cummings, C., Ratwani, K. L., Ubillus, G., Nocker, C., Gerard, C. M., Uhl, E. R., & Tucker, J. S. (2019). Assessing character in U.S. Army initial entry training (ARI Technical Report 1373, DTIC No. AD1077839). U.S. Army Research Institute for the Behavioral and Social Sciences. https://apps.dtic.mil/sti/pdfs/AD1077839.pdf

Toumbeva, T. H., Ratwani, K. L., Diedrich, F. J., Flanagan, S. M., & Uhl, E. R. (2018). Development of a behaviorally anchored rating scale for leadership (ARI Research Product 2018-06, DTIC No. AD1048729). U.S. Army Research Institute for the Behavioral and Social Sciences. https://apps.dtic.mil/sti/pdfs/AD1048729.pdf

Truxillo, D. M., Donahue, L. M., & Sulzer, J. L. (1996). Setting cutoff scores for personnel selection tests: Issues, illustrations, and recommendations. Human Performance, 9(3), 275–295. https://doi.org/10.1207/s15327043hup0903_6

U.S. Department of the Army. (2019a). Army leadership and the profession (Army Doctrine Publication 6-22). U.S. Government Publishing Office. https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN20039-ADP_6-22-001-WEB-0.pdf

U.S. Department of the Army. (2019b). Mission command: Command and control of Army forces (Army Doctrine Publication 6-0). U.S. Government Publishing Office. https://armypubs.army.mil/epubs/DR_pubs/DR_a/ARN18314-ADP_6-0-000-WEB-3.pdf

Zook, A., Lee-Urban, S., Riedl, M. O., Holden, H. K., Sottilare, R. A., & Brawner, K. W. (2012). Automated scenario generation: Toward tailored and optimized military training in virtual environments. Proceedings of the International Conference on the Foundations of Digital Games, 164–171. https://doi.org/10.1145/2282338.2282371

Randy J. Brou is a team leader and research psychologist with the U.S. Army Research Institute for the Behavioral and Social Sciences. He holds a PhD in applied cognitive science from Mississippi State University.

Jayne L. Allen is a research psychologist with the U.S. Army Research Institute for the Behavioral and Social Sciences. She holds a PhD in personality and social psychology, as well as a cognate in the science of college teaching, from the University of New Hampshire.

Krista L. Ratwani is a principal scientist and the vice president of research at Aptima where she focuses on learning and development throughout careers. She holds a PhD in industrial-organizational psychology from George Mason University.

Frederick J. Diedrich is a consultant who focuses on methods of instruction and assessment designed to deliberately support competency and attribute development. He holds a PhD in cognitive science from Brown University.

Tatiana H. Toumbeva is a senior scientist and team lead in the Training, Learning and Readiness Division at Aptima with expertise in assessment tool development and validation, training evaluation, and occupational health psychology. She holds a PhD in industrial-organizational psychology from Bowling Green State University.

Scott M. Flanagan is a retired Army master sergeant who served his career in the U.S. Special Operations Command and who now provides operational consulting focused on purposeful development and assessment of leader competencies and attributes.


October 2022