July 2024 Online Exclusive Article

Air Assault!

Applying Learning Science to Army Skill and Knowledge Acquisition

Gregory I. Hughes, PhD
Shanda D. Lauer, PhD
Wade R. Elmore

 


 
Photo: CH-47 Chinook helicopter crews assigned to the 82nd Combat Aviation Brigade, 82nd Airborne Division, conduct sling load operations in conjunction with paratroopers assigned to the 82nd Airborne Division Artillery.

Editor’s note: “Air Assault” was initially published in the April 2024 edition of the Journal of Military Learning. To view this issue or any past issues, please visit https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/.

To ensure force readiness, soldiers in the U.S. Army must acquire critical knowledge and skills at an incredible rate. They are expected to retain and recall this knowledge throughout their careers, not only in garrison environments but also in austere, high-stakes, and stressful conditions. Because the time and resources available for training and education are constrained, it is imperative to optimize these activities using all the resources available to the Army. Although Army schools are highly successful at preparing soldiers for their duties, there are techniques that could improve education and training but that remain underexplored in military contexts. Over the past several decades, researchers in the cognitive sciences have identified techniques that reliably enhance long-term learning outcomes, even with little to no investment of time or resources.1 However, these techniques have overwhelmingly been explored in laboratory settings, civilian educational environments (i.e., kindergarten to college), and sports. The purpose of this study was to explore how learning techniques that require minimal investment of time and resources could be integrated into an Army education and training environment. Specifically, we partnered with the Sabalauski Air Assault School at Fort Campbell, Kentucky, to explore this question.

Learning Sciences

Among the most potent learning techniques are practice testing, spacing out learning sessions, and interleaving learning materials. Research overwhelmingly demonstrates that practice testing leads to superior learning compared to an equivalent amount of time reviewing material.2 The superiority of practice testing has not only been documented when compared to less effortful study methods like rereading and highlighting but also to deeper, conceptual, and/or elaborative methods of studying (e.g., idea mapping, sentence generation, and creating mnemonic devices).3 Spacing is another potent technique. The spacing effect refers to the finding that it is better to spread out the studying of a topic into multiple instances across time compared to an equal amount of time studying that topic in a single session (e.g., a one-hour learning session on four separate days compared to a single four-hour learning session).4 Relatedly, interleaving is a method of reviewing material that is similar to the spacing effect but carries an additional advantage. The interleaving effect is the finding that studying various topics in an alternating fashion (ABABABAB) is often better than studying one topic entirely before moving on to another (i.e., blocking: AAAABBBB).5 Interleaving necessarily involves some degree of spaced learning, since the study of one topic is divided into temporally distinct instances. A unique benefit of interleaving is that it juxtaposes different topics, allowing learners to compare and contrast the shared and distinct features of each topic. This juxtaposition, termed discriminative contrast, is useful when categories of knowledge share many features in common, making it difficult for learners to notice the subtle differences that separate them.6

Although these learning techniques entail their own unique advantages, their efficacy is underpinned by similar mechanisms. Two mechanistic frameworks parsimoniously explain these benefits. The first is the principle of transfer-appropriate processing, which states that performance is optimized when the cognitive processes involved in training match those that are called upon during the later testing of those skills.7 This framework explains why practice testing is effective: it requires people to recall information from long-term memory, which is precisely what is asked of them during their graded exams. Similarly, spacing is effective because when learners are assessed, an appreciable amount of time has usually passed since the last study episode; spaced learning approximates the experience they will later have when their knowledge or skill level is formally assessed. The second is the principle of desirable difficulty, which states that learning is optimized when people practice at a moderate level of difficulty.8 The most commonly used learning techniques are shallow and low effort (e.g., rereading), keeping the level of challenge too low to spur sufficient growth and progress.

To determine where and how these techniques could be implemented at the Sabalauski Air Assault School at Fort Campbell, Kentucky, we conducted focus groups and interviews with the instructor cadre. Overwhelmingly, the cadre expressed that a single component of the air assault course resulted in more failures than any other: identifying errors in equipment rigged to aircraft that would endanger in-flight operations (sling load inspection). In this context, the sling is the name for the equipment that attaches cargo (a load) to a rotary-wing aircraft. Incorrectly rigging the load to the aircraft can endanger in-flight operations by creating aerodynamic instability. Correct rigging is therefore vital to successful air assault operations. In the present study, we worked with the cadre to modify the training of sling load inspection and compared course outcomes with the previous methods of training.

[Figure 1]

Sling Load Inspection

In the air assault course, soldiers learn to inspect four loads (see figure 1): the A-22 Cargo Bag, M1151 HMMWV (i.e., a Humvee truck), M1102 Trailer, and 5K Cargo Net. The skill essentially consists of two simultaneous tasks: (a) a recommended inspection sequence, a systematic method of reviewing the equipment in a particular order and manner that ensures full coverage of the rigging and load; and (b) a categorization task in which pieces of the equipment are judged as operable or deficient (see figure 2). The identification of deficiencies is the true focus of the task, as deficiencies are defined as errors in the rigging that would threaten safe in-flight operations.

To pass the air assault course, soldiers must successfully conduct sling load inspection on four different types of loads (see figure 1). For each load, soldiers must identify three out of four deficiencies in under two minutes. Although a specific inspection sequence is taught and strongly recommended by instructors, it is not required during testing and soldiers are not penalized for deviating from that sequence. After the first round of testing is complete, soldiers who failed any of the loads receive additional instruction and then are given a second opportunity to conduct the sling load inspection on each type of load they failed. On the second test, the sling loads may have an entirely new set of deficiencies. A soldier who fails any load twice also fails the entire course.
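
The testing logic above lends itself to a compact summary. The following is a minimal sketch in R (the language used later for this study’s statistical analyses); the function names and data layout are our own illustrative assumptions, not the schoolhouse’s scoring procedure.

```r
# Hedged sketch of the sling load testing rules described above.
# Function names and inputs are illustrative assumptions.
passes_load <- function(deficiencies_found, seconds_elapsed) {
  # A load is passed by identifying at least 3 of the 4 rigged
  # deficiencies within the two-minute limit.
  deficiencies_found >= 3 && seconds_elapsed <= 120
}

passes_sling_load_portion <- function(first_test, retest) {
  # first_test / retest: named logical vectors, one element per load.
  # Soldiers retest only the loads they failed; failing any load
  # twice fails the entire course.
  failed <- names(first_test)[!first_test]
  length(failed) == 0 || all(retest[failed])
}

passes_load(3, 110)  # TRUE: three deficiencies found within two minutes

# Example: a soldier fails the HMMWV initially but passes the retest.
first  <- c(A22 = TRUE, HMMWV = FALSE, Trailer = TRUE, CargoNet = TRUE)
second <- c(HMMWV = TRUE)
passes_sling_load_portion(first, second)  # TRUE
```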

Soldiers are trained on sling load inspection through a mixture of classroom presentations, in-person lectures with the equipment, and hands-on practice (practical exercises). Learning science techniques could be integrated into any of these learning activities and/or at-home study materials. For the purposes of our project, we limited our efforts to modifications of the practical exercises that would require virtually no increase in time or resources to implement. We made this decision for three reasons. First, the majority of training time is spent on the practical exercises, meaning that an intervention in this part of the course would likely exert the largest effects on the learning outcomes. Second, modifications to the practical exercises would circumvent adherence problems that would likely occur with voluntary after-hours exercises or with at-home study materials. Third, the practical exercises are the part of the training that is most similar to the actual hands-on sling load inspection test. This means that any improvements in these exercises would be most likely to transfer to the hands-on tests.

Motivated by the principle of transfer-appropriate processing, we decided to explore how making the practical exercises more like actual testing conditions would affect course outcomes. Recall that testing conditions require soldiers to inspect loads and identify three out of four rigged deficiencies in under two minutes per load. The practical exercises deviate from these conditions in two critical ways. First, half of these exercises are performed on clean loads, which have no deficiencies rigged on the equipment, whereas soldiers are only presented with loads that do have deficiencies (dirty loads) during testing. Second, the practical exercises are not timed, meaning that soldiers never get accustomed to the feeling of time pressure or establish an appropriate pace and rhythm for conducting their inspections. The cadre emphasized that soldiers frequently struggled with the time pressure of their tests, causing many soldiers to go too quickly or too slowly. Therefore, we had the cadre conduct all the practical exercises with (a) only dirty loads (four deficiencies rigged on the equipment) and (b) time pressure. The cadre decided to set the timers for three minutes rather than the two-minute standard used during actual testing. Although this timing component did not precisely reflect testing conditions, it perhaps struck a balance between making the practical exercises more test-like and making the task too difficult for novices (i.e., two minutes may have been undesirably difficult).

[Figure 2]

Notably, conducting the practical exercises with all dirty loads challenged an intuitive notion held by many members of the cadre, which is that time spent with clean loads is uniquely valuable for honing the skill of sling load inspection. The basic idea is that by spending time with clean loads, a soldier learns “what right looks like,” and consequently, deviations from “right” would leap out at the soldier, who would then call out a deficiency. Replacing this time with more exposure to dirty loads would hypothetically put the cart before the horse, undermining the acquisition of what “right” looks like.

There is ample scientific evidence to call this notion into question. This evidence comes from the literature on visual category learning, which investigates skills similar to sling load inspection but with different materials. Sling load inspection is fundamentally a series of discrete visual categorization tasks in which soldiers deem subcomponents of the rigging as belonging to one of two categories: functional or deficient. Although the inspection sequence involves interacting with the equipment physically, the categorization component of the task is primarily visual in nature. The deficiencies are identified based on appearances rather than tactile cues (e.g., the absence of a castellated nut, a twist in a strap, or a misrouted chain can all be identified by sight alone; see figure 2). Visual categorization experiments, such as those that involve determining whether chest X-rays exhibit healthy lungs or signs of disease, involve the same underlying cognitive mechanisms.

In the terminology of the research on visual category learning, some members of the cadre saw value in “blocking” the study of categories (i.e., study the category of “clean” before “dirty”). Early researchers examining visual categorization felt similarly, arguing that it makes sense to master one category before moving onto another (e.g., for categories clean [C] and dirty [D], the sequence could look like CCCCDDDD).9 However, this method is usually not as effective as alternating between examples of each category (i.e., interleaving; CDCDCDCD), especially when the features that discriminate the categories are subtle deficiencies, which is typical of sling load inspection (e.g., the orientation of a small castellated nut can distinguish between clean and deficient; see figure 2).10 Interleaving is beneficial for learning because it highlights and draws attention to the critical differences between categories (e.g., clean vs. dirty), making the learning process more efficient by promoting discriminative contrast.11 In the context of sling load inspection, interleaving would mean examining a clean version of a piece of equipment (e.g., a correctly rigged 188-inch strap) and then studying a dirty version of that equipment (a version with a deficiency; e.g., a twisted 188-inch strap). This type of juxtaposition would only occur during dirty load sessions because they entail a mixture of clean and dirty equipment. An additional benefit of this kind of study method is that it keeps learners engaged. Blocked learning sequences tend to be too predictable and result in boredom.12
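
To make the contrast concrete, the sketch below (in R, the language used for this study’s analyses) builds the two practice sequences discussed above; the equipment names are illustrative assumptions.

```r
# Blocked vs. interleaved practice sequences for clean (C) and
# deficient (D) versions of the same equipment. Item names are
# illustrative, not an official equipment list.
items <- c("188-inch strap", "castellated nut", "chain routing", "clevis")

clean <- paste(items, "(clean)")
dirty <- paste(items, "(deficient)")

# Blocked (CCCCDDDD): master "clean" before seeing any deficiencies,
# the scheme favored by intuition.
blocked <- c(clean, dirty)

# Interleaved (CDCDCDCD): alternate clean and deficient versions of
# each item, juxtaposing them to promote discriminative contrast.
interleaved <- as.vector(rbind(clean, dirty))

print(blocked)
print(interleaved)
```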

Method

Participants. We obtained data from a total of 2,826 soldiers who participated in the sling load portion of the air assault course. The treatment group consisted of six classes (N = 656). The control group was composed of the preceding fourteen classes (N = 2,170). Each class was taught by one of three instructor teams.

Procedure. The Combined Academic Institutional Review Board of Army University determined the project to be exempt human-subjects research, with concurrence from the U.S. Army Combat Capabilities Development Command Soldier Center Human Research Protections Office. The exempt categorization was due to the research occurring in normal established classroom settings, involving normal educational practices, and being unlikely to negatively impact students’ ability to learn required educational content. For the treatment classes, we had the cadre brief soldiers on our efforts to evaluate the efficacy of course modifications and inform soldiers that they could opt out of having their data included via a web link. No soldiers opted to withhold their data from the project.

[Table 1]

In the six treatment classes, we had the cadre modify the practical exercises by (1) replacing all clean loads (no deficiencies rigged) with dirty loads (four deficiencies rigged) and (2) introducing time pressure by limiting soldiers to three minutes per sling load practical exercise. For the control classes, we asked the cadre to provide historical data from the preceding classes, which we used as baseline performance levels. For all classes, we asked the cadre to record the performance of each soldier for each load on the initial test and the retest. We also requested the cadre provide us with individual soldier characteristics that they identified as significant predictors of performance, which included soldier rank and temporary duty status (whether a soldier was permanently stationed at Fort Campbell or was on orders from another location).

Results

We used fixed and mixed logistic regression modeling to analyze binary outcome data and adopted an alpha level of .05. The analyses were conducted in R (a statistical computing language).13 We used the lme4 package for logistic regression modeling and the emmeans package for analyzing estimated marginal means.14 The primary dependent variable of interest was whether a soldier passed the hands-on sling load test. We were unable to analyze the data in a more granular way, as the schoolhouse only provided us with performance data on each test (first or retest) and each load for fewer than half of the collected sample (1,142 of the 2,826 soldiers). An analysis of this subset of data would be problematic because we would be unable to control for several contaminating factors, the importance of which will become clear in the subsequent analysis. Note that of the six treatment classes, only four incorporated the element of time pressure. Nevertheless, we analyzed all six treatment classes as a single unit, as all of them used dirty loads during the practical exercises.

Hands-on sling load test. For the hands-on sling load test, soldiers in the treatment group (M = 84.99%) outperformed those in the control group (M = 77.30%) by 7.69 percentage points, β = .51, p < .0001. However, there were differences across the groups that could have accounted for this increase in pass rate rather than the modified practical exercises. To evaluate this possibility, we examined the contribution of several variables the schoolhouse cadre identified as potential confounds, including average class size, instructor teams, and two variables pertaining to class composition (TDY status and soldier rank). Ultimately, we planned to fit a model that accounted for any of the factors that may have unfairly influenced the between-groups comparison.
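
For illustration, the unadjusted comparison above corresponds to a simple fixed-effects logistic regression. The sketch below is a hedged reconstruction; the data frame d and its columns (pass coded 0/1, group) are our assumptions, not the authors’ actual code.

```r
# Unadjusted comparison of pass rates between groups.
# The data frame `d` and its column names are assumptions.
d$group <- factor(d$group, levels = c("control", "treatment"))

m0 <- glm(pass ~ group, family = binomial, data = d)
summary(m0)  # the group coefficient is in log-odds units (cf. beta = .51)

# Fitted pass probabilities per group, for comparison with the raw means
predict(m0,
        newdata = data.frame(group = c("control", "treatment")),
        type = "response")
```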

Class size. The average class of the treatment group (M = 111) was smaller than that of the control group (M = 171), suggesting the possibility that the smaller class size underlay the enhanced pass rate. However, the pass rate of the smallest ten classes (M = 79%) was not reliably different from that of the largest ten classes (M = 79%), t(18) = 0.08, p = .94, d = 0.03. We therefore did not include this variable in our final model.

Instructor teams. The number of soldiers taught by each instructor team was not equal across groups, χ2(2) = 16.69, p < .001 (see table 1). For example, 40% of soldiers in the control group were taught by Team A, but only 31% of soldiers in the treatment group were taught by Team A. This was problematic because the overall pass rate of Team A (70%) was lower than that of Teams B (90%) and C (81%), meaning that instructor team assignment confounded the comparison of pass rates between the groups.
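
The two checks above can be expressed in a few lines of R. In this hedged sketch, class_summary (one row per class, with size and pass_rate columns) and the column d$team are hypothetical names we introduce for illustration.

```r
# Class size: compare pass rates of the ten smallest and ten largest classes.
cs <- class_summary[order(class_summary$size), ]
t.test(head(cs$pass_rate, 10), tail(cs$pass_rate, 10),
       var.equal = TRUE)  # pooled-variance t-test with df = 18, as reported

# Instructor teams: does team composition differ across groups?
chisq.test(table(d$team, d$group))  # 3 x 2 table, hence chi-square df = 2
```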

[Table 2]

TDY status. Next, we looked at whether each soldier’s home station was Fort Campbell, meaning that the air assault school was local to them, or if they were traveling to attend this course from another installation (i.e., they are on temporary duty, or TDY). As shown in table 2, soldiers who were TDY (M = 88%) passed at a higher rate than those who were local (M = 77%), β = .82, p < .0001. On average, the proportion of TDY soldiers was higher in the treatment group (M = 28%) compared to the control group (M = 18%), β = .58, p < .0001, resulting in an artificial advantage of the former over the latter.

Soldier rank. We next turned our attention to soldier rank. For the sake of a simpler analysis, we created three bins for soldier rank: junior enlisted, senior enlisted, and officer. As shown in table 3, higher-ranked soldiers (M = 90%) passed at a higher rate than lower-ranked soldiers (M = 76%), β = 1.10, p < .001. The rank composition of the treatment and control groups was also not identical, χ2(2) = 37.13, p < .001. For example, junior-enlisted soldiers made up a greater proportion of the control group (M = 56%) than of the treatment group (M = 43%). Again, this was a confound that benefited the pass rate of the treatment group.

Final model. We used mixed-effects logistic regression to create a model that predicted the effect of treatment group on pass rates while accounting for instructor teams (random effect), TDY status (fixed effect), and rank (fixed effect). Treatment group was coded as 0 (control) or 1 (treatment); TDY status as 0 (local) or 1 (TDY); and rank as 0 (enlisted) or 1 (officer).15 We evaluated the significance of the fixed and random effects by conducting chi-square likelihood ratio tests on the change in model fit (deviance) on a model-to-model basis (for the model outputs, see table 4). The degrees of freedom for each chi-square test equal the difference in the number of parameters between the two models being compared. We added effects one at a time, and if the model fit improved at a statistically significant level, we deemed that effect significant. Notably, the model terms in this analysis are in log-odds units rather than on the probability scale (i.e., the probability of passing the hands-on test). Where appropriate, we convert these log-odds outcomes to the probability scale to aid interpretability of the results.
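
Under the same assumptions about variable names, the model-building sequence described above might look like the following sketch using lme4; each anova() call performs a chi-square likelihood ratio test on the change in deviance between nested models.

```r
library(lme4)

m_null <- glm(pass ~ 1, family = binomial, data = d)                # null model
m_team <- glmer(pass ~ 1 + (1 | team), family = binomial, data = d) # + team
m_tdy  <- update(m_team, . ~ . + tdy)                               # + TDY status
m_rank <- update(m_tdy,  . ~ . + rank)                              # + rank
m_int  <- update(m_rank, . ~ . + tdy:rank)                          # + interaction
m_grp  <- update(m_int,  . ~ . + group)                             # + treatment

anova(m_team, m_null)  # random effect of instructor team
anova(m_tdy,  m_team)  # TDY status
anova(m_rank, m_tdy)   # soldier rank
anova(m_int,  m_rank)  # TDY-by-rank interaction
anova(m_grp,  m_int)   # treatment group
```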

We started with a null model, which included only an intercept and no fixed or random effects. We then added a random effect of instructor team, which significantly improved model fit, χ2(1) = 99.08, p < .001, confirming significant variation in performance across teams. Next, we added TDY status as a fixed-effects predictor, which was also significant, χ2(1) = 40.43, p < .001. Soldiers who were on TDY (M = 92%) passed their tests at a higher rate than those who were not (M = 85%). There was an effect of soldier rank, χ2(1) = 59.80, p < .001, and a TDY-by-rank interaction, χ2(1) = 3.92, p = .048. For enlisted soldiers, those who were on TDY (M = 89%) significantly outperformed those who were not (M = 77%), but the same was not true for officers (M = 92% and 93%, respectively). There was an effect of group, χ2(1) = 8.58, p = .003, but none of the two-way or three-way interactions with group were significant (ps > .36). To quantify the effect of group, we calculated estimated marginal means that were weighted according to characteristics of the entire sample (e.g., both group means were weighted assuming 14% of soldiers were both enlisted and on TDY, which was the overall sample average across groups). As shown in table 5, the advantage of the treatment group (M = 87.41%) over the control group (M = 81.75%) was 5.66 percentage points, which was 2.03 points smaller than the raw, unadjusted difference (7.69 points) that did not account for group differences in the variables of interest.
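
Continuing the sketch, the weighted estimated marginal means described above can be obtained with the emmeans package; under our naming assumptions, "proportional" weighting averages over TDY status and rank according to their observed frequencies in the full sample.

```r
library(emmeans)

# Adjusted pass probabilities for control vs. treatment, weighting the
# averaged-over factors (TDY status, rank) by their sample proportions.
emm <- emmeans(m_grp, ~ group, type = "response", weights = "proportional")
summary(emm)
```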

[Table 3]

Discussion

The results of this experiment suggest that the practical exercises should be made more like actual testing conditions by (1) using only loads rigged with deficiencies and (2) incorporating time pressure. After accounting for differences in sample composition between the control and treatment groups (e.g., rank composition), the two changes to the practical exercises resulted in a 5.66 percentage point increase in sling load pass rates. This increase was achieved at essentially no additional investment of time or resources. This seemingly modest increase in pass rate scales up to a substantial impact across an entire year of air assault courses. We observed an average class size of 153 soldiers, and we would expect approximately 125 of those soldiers (81.75%) to pass the sling load inspection test with the traditional practical exercises. With the modified practical exercises, we would expect approximately 134 soldiers (87.41%) to pass that portion of the class, an increase of nine soldiers per class. The Sabalauski Air Assault School conducts about forty air assault classes per year, meaning that the modified practical exercises would lead to roughly 360 more soldiers passing their sling load inspection annually. The modified practical exercises would therefore result in an increase of about 2.88 classes’ worth of sling-load test graduates (i.e., 360/125). Increasing pass rates at the air assault course represents a force multiplier, both directly by increasing the number of air assault certified soldiers and indirectly by opening up space for more soldiers to take the course. Critically, the increases in pass rates that we observed in the present study were accomplished without modifying the long-established Army standards.
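
The scale-up arithmetic in the paragraph above can be reproduced directly:

```r
class_size   <- 153
classes_year <- 40

passes_old <- class_size * 0.8175          # ~125 soldiers per class
passes_new <- class_size * 0.8741          # ~134 soldiers per class

extra_per_class <- passes_new - passes_old # ~8.7, i.e., about nine soldiers
round(extra_per_class) * classes_year      # ~360 more graduates per year
(round(extra_per_class) * classes_year) / passes_old  # ~2.88 classes' worth
```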

Limitations

One limitation of the present experiment was that although all six of the treatment classes used only dirty loads during the practical exercises, only four of those classes incorporated the element of time pressure. It is therefore not possible to determine the separate and joint contributions of each change. Nevertheless, we suspect that replacing the clean loads with dirty loads made the larger contribution to the increased pass rate: after implementing the change in load type, pass rates increased and then remained stable with the addition of time pressure. Of course, future work would be needed to resolve these questions. Another limitation is that we could not examine performance on the individual loads with an adequate level of precision due to gaps in the data set. It is conceivable, for example, that the changes to the practical exercises affected some loads more than others (e.g., preferentially improved the easiest or hardest).

Future Directions

The learning sciences can be applied to other areas of the air assault course. Practice testing, spacing, and interleaving can be incorporated into classroom activities and/or review materials for use outside of the classroom. We investigated the latter option in another research study, which involved deploying learning content through a web-based and mobile learning platform.16 Within the classroom, the lectures could be periodically punctuated by small practice tests or brief review of previously introduced content (i.e., spacing). Of course, these types of interventions could be applied to any other course that requires fact-based learning and/or physical skills. For these categories of learning interventions, there are many potential ways to implement them, which can have measurable impacts on outcomes (e.g., the type and/or timing of feedback during practice testing).17 Moreover, these techniques can be combined with other types of learning techniques, like elaborative encoding (e.g., creating links with old knowledge or generating memory mnemonics) or fading (sequencing material by level of difficulty).18

[Table 4]

Low-lift learning science interventions can also be applied to other Army schoolhouse settings. Of course, the results of our present work are most directly relevant to similar tasks trained elsewhere, like equipment inspection at the Advanced Airborne School (i.e., jumpmaster personnel inspection). That said, given that these techniques have been successful across a wide range of disparate tasks in civilian populations (e.g., radiology, art history, basketball), we have little reason to doubt the same would be true for cases of military application. For example, air defense artillery airframe identification involves categorizing different types of aircraft based on the noises they produce. As with the visual domain, learning auditory discrimination benefits from interleaved learning sequences due to similar cognitive mechanisms.19 The results of the present experiment would therefore likely extend to that context and possibly much less similar tasks.

One potential challenge of integrating effective learning science techniques into Army education settings is a common metacognitive illusion. Namely, the use of effective learning techniques often leaves people feeling less confident in their learning outcomes than less effective alternatives do.20 This is likely because the more effective techniques tend to be harder, forcing learners to become aware of gaps in their knowledge that less demanding techniques, like rereading notes, would not reveal. Consequently, learners sometimes prefer the less effective alternative because they falsely construe it as superior.21 For this reason, Army educators should consider educating soldiers about this metacognitive conundrum and informing them that difficulties experienced during the learning process are often signs of progress, not evidence of failure.

[Table 5]

Working with the schoolhouses, as opposed to the course proponent, has advantages and disadvantages. The main advantage is that implementing these relatively minor changes to Army courses only requires the commander’s discretion as they are not changes in the program of instruction. In addition, working with the schoolhouse leadership and cadre directly affords an opportunity to increase buy-in, which in turn can increase the probability of a successful outcome. However, there are two major disadvantages that should be considered: (1) future schoolhouse leadership can just as easily undo any course modifications, and (2) any potential changes to a course must not conflict with the program of instruction (i.e., the curriculum that is designed by the proponent). For these reasons, the proponent would be an important stakeholder for similar future research efforts.

Incorporating the findings of the present study into the training and education of future instructors and curriculum developers will aid dissemination throughout the enterprise, regardless of location or proponent. The Common Faculty Development Instructor Course and the Common Faculty Development Developer Course, both taught by Army University, could serve as additional venues for translating research findings into improved instruction, lesson plans, and curriculum design, directly impacting student outcomes across the Army Learning Enterprise.

Research was sponsored by the U.S. Army DEVCOM Soldier Center and was accomplished under Cooperative Agreement Number W911QY-19-2-0003. The opinions expressed herein are those of the authors and do not reflect those of the United States Army. The U.S. government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation hereon.

 


Notes

  1. Nicholas J. Cepeda et al., “Distributed Practice in Verbal Recall Tasks: A Review and Quantitative Synthesis,” Psychological Bulletin 132, no. 3 (2006): 354–80, https://doi.org/10.1037/0033-2909.132.3.354; Jonathan Firth, Ian Rivers, and James Boyle, “A Systematic Review of Interleaving as a Concept Learning Strategy,” Review of Education 9, no. 2 (2021): 642–84, https://doi.org/10.1002/rev3.3266; Gregory I. Hughes and Ayanna K. Thomas, “Visual Category Learning: Navigating the Intersection of Rules and Similarity,” Psychonomic Bulletin and Review 28, no. 3 (2021): 711–31, https://doi.org/10.3758/s13423-020-01838-0.
  2. Olusola O. Adesope et al., “Rethinking the Use of Tests: A Meta-Analysis of Practice Testing,” Review of Educational Research 87, no. 3 (2017): 659–701, https://doi.org/10.3102/0034654316689306.
  3. Jeffrey D. Karpicke and Janell R. Blunt, “Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping,” Science 331, no. 6018 (2011): 772–75, https://doi.org/10.1126/science.1199327; Jeffrey D. Karpicke and Megan A. Smith, “Separate Mnemonic Effects of Retrieval Practice and Elaborative Encoding,” Journal of Memory and Language 67, no. 1 (2012): 17–29, https://doi.org/10.1016/j.jml.2012.02.004.
  4. Hermann Ebbinghaus, Über das gedächtnis: Untersuchungen zur experimentellen psychologie [Memory: a contribution to experimental psychology] (Berlin: Duncker and Humblot, 1885); Cepeda et al., “Distributed Practice in Verbal Recall Tasks”; Peter F. Delaney et al., “Spacing and Testing Effects: A Deeply Critical, Lengthy, and at Times Discursive Review of the Literature,” in The Psychology of Learning and Motivation: Advances in Research and Theory, ed. Brian H. Ross (New York: Elsevier Academic Press, 2010): 63–147.
  5. Sinah Goode and Richard A. Magill, “Contextual Interference Effects in Learning Three Badminton Serves,” Research Quarterly for Exercise and Sport 57, no. 4 (1986): 304–14, https://doi.org/10.1080/02701367.1986.10608091; Kellie Green Hall, Derek A. Domingues, and Richard Cavazos, “Contextual Interference Effects with Skilled Baseball Players,” Perceptual and Motor Skills 78, no. 3 (1994): 835–41, https://doi.org/10.1177/003151259407800331; Nate Kornell and Robert A. Bjork, “Learning Concepts and Categories: Is Spacing the ‘Enemy of Induction’?,” Psychological Science 19, no. 6 (2008): 585–92, https://doi.org/10.1111/j.1467-9280.2008.02127.x; Firth, Rivers, and Boyle, “A Systematic Review.”
  6. Robert L. Goldstone, “Isolated and Interrelated Concepts,” Memory and Cognition 24, no. 5 (1996): 608–28, https://doi.org/10.3758/BF03201087; Kornell and Bjork, “Learning Concepts and Categories”; Hughes and Thomas, “Visual Category Learning.”
  7. Teresa A. Blaxton, “Investigating Dissociations among Memory Measures: Support for a Transfer-Appropriate Processing Framework,” Journal of Experimental Psychology: Learning, Memory, and Cognition 15, no. 4 (1989): 657–68, https://doi.org/10.1037/0278-7393.15.4.657; C. Donald Morris, John D. Bransford, and Jeffery J. Franks, “Levels of Processing versus Transfer Appropriate Processing,” Journal of Verbal Learning and Verbal Behavior 16, no. 5 (1977): 519–33, https://doi.org/10.1016/S0022-5371(77)80016-9.
  8. Robert A. Bjork, “Memory and Metamemory Considerations in the Training of Human Beings,” in Metacognition: Knowing About Knowing, ed. Janet Metcalfe and Arthur P. Shimamura (Cambridge, MA: MIT Press, 1994), 185–205, https://doi.org/10.7551/mitpress/4561.001.0001; Robert A. Bjork and E. L. Bjork, “Desirable Difficulties in Theory and Practice,” Journal of Applied Research in Memory and Cognition 9, no. 4 (2020): 475–79, https://doi.org/10.1016/j.jarmac.2020.09.003.
  9. Robert M. Gagné, “The Effect of Sequence of Presentation of Similar Items on the Learning of Paired Associates,” Journal of Experimental Psychology 40, no. 1 (1950): 61–73, https://doi.org/10.1037/h0060804; Kenneth H. Kurtz and Carl I. Hovland, “Concept Learning with Differing Sequences of Instances,” Journal of Experimental Psychology 51, no. 4 (1956): 239–43, https://doi.org/10.1037/h0040295.
  10. Hughes and Thomas, “Visual Category Learning.”
  11. Goldstone, “Isolated and Interrelated Concepts”; Sean Kang and Harold Pashler, “Learning Painting Styles: Spacing Is Advantageous When It Promotes Discriminative Contrast,” Applied Cognitive Psychology 26, no. 1 (2012): 97–103, https://doi.org/10.1002/acp.1801; Kornell and Bjork, “Learning Concepts and Categories.”
  12. Francisco Javier Guzman-Munoz, “The Advantage of Mixing Examples in Inductive Learning: A Comparison of Three Hypotheses,” Educational Psychology 37, no. 4 (2017), https://doi.org/10.1080/01443410.2015.1127331.
  13. R Core Team, “R: A Language and Environment for Statistical Computing,” R Foundation for Statistical Computing, 10 February 2015, https://www.gbif.org/tool/81287/r-a-language-and-environment-for-statistical-computing.
  14. Douglas Bates et al., “Fitting Linear Mixed-Effects Models Using lme4,” Journal of Statistical Software 67, no. 1 (2015): 1–48, https://doi.org/10.18637/jss.v067.i01; Russell Lenth, “Emmeans: Estimated Marginal Means, AKA Least-Squares Means,” CRAN, accessed 10 July 2024, https://cran.r-project.org/package=emmeans.
  15. We treated soldier rank as a binary variable (enlisted = 0, officer = 1) to avoid an excessive number of model terms and convergence issues.
  16. Scotty Craig et al., “Investigating the Impact of Mobile Microlearning and Self-Regulated Learning Support on Soldiers’ Self-Efficacy and Retention within an Army Schoolhouse,” Journal of Military Learning 7, no. 2 (2023): 29–45, https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/Conference-Edition-2023-Journal-of-Military-Learning/Mobile-Microlearning/.
  17. W. Todd Maddox, F. Gregory Ashby, and Corey J. Bohil, “Delayed Feedback Effects on Rule-Based and Information-Integration Category Learning,” Journal of Experimental Psychology: Learning, Memory, and Cognition 29, no. 4 (2003): 650–62, https://doi.org/10.1037/0278-7393.29.4.650; Harold Pashler et al., “Enhancing Learning and Retarding Forgetting: Choices and Consequences,” Psychonomic Bulletin & Review 14, no. 2 (2007): 187–93, https://doi.org/10.3758/BF03194050.
  18. Joel R. Levin, “Elaboration-Based Learning Strategies: Powerful Theory = Powerful Application,” Contemporary Educational Psychology 13, no. 3 (1988): 191–205, https://doi.org/10.1016/0361-476X(88)90020-3; Mark McDaniel, “Combining Retrieval Practice with Elaborative Encoding: Complementary or Redundant?,” Educational Psychology Review 35, no. 3 (2023), https://doi.org/10.1007/s10648-023-09784-8; Harold Pashler and M. C. Mozer, “When Does Fading Enhance Perceptual Category Learning?,” Journal of Experimental Psychology: Learning, Memory, and Cognition 39, no. 4 (2013): 1162–73, https://doi.org/10.1037/a0031679.
  19. Ruth Chen, Lawrence Gibson, and Geoffrey Norman, “Manipulation of Cognitive Load Variables and Impact on Auscultation Test Performance,” Advances in Health Sciences Education 20, no. 4 (2015): 935–52, https://doi.org/10.1007/s10459-014-9573-x; Sarah Shi Hui Wong, Si Chen, and Stephen Wee Hun Lim, “Learning Melodic Musical Intervals: To Block or to Interleave?,” Psychology of Music 49, no. 4 (2021): 1027–46, https://doi.org/10.1177/0305735620922595; Sarah Shi Hui Wong, Amanda Chern Min Low, and Stephen Wee Hun Lim, “Learning Music Composers’ Styles: To Block or Interleave?,” Journal of Research in Music Education 68, no. 2 (2020): 156–74, https://doi.org/10.1177/0022429420908312.
  20. Henry Roediger and Jeffrey D. Karpicke, “Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention,” Psychological Science 17, no. 3 (2006): 249–55, https://doi.org/10.1111/j.1467-9280.2006.01693.x.
  21. Jeffrey D. Karpicke, “Metacognitive Control and Strategy Selection: Deciding to Practice Retrieval during Learning,” Journal of Experimental Psychology: General 138, no. 4 (2009): 469–86, https://doi.org/10.1037/a0017341.

 

Gregory Hughes, PhD, is a research psychologist at the U.S. Army Combat Capabilities Development Command Soldier Center at Natick, Massachusetts. Hughes obtained a PhD in experimental psychology from Tufts University and has been conducting Army research for eight years. His main research efforts focus on optimizing the acquisition and retention of new knowledge and complex skills.

Shanda Lauer, PhD, is a research psychologist in the Institutional Research and Assessment Division, Vice Provost of Academic Affairs, at the Army University in Fort Leavenworth, Kansas. She holds a master’s degree focused in discipline-based education research and a PhD in psychology with a neuroscience emphasis. Her program of research focuses on improving communication in the Army and enhancing education through technology use and the application of best practices.

Wade R. Elmore is a research psychologist at the U.S. Army Combat Capabilities Development Command Soldier Center at Natick, Massachusetts. In his ten years working for the U.S. Army, he has worked at the Center for Army Leadership and the Army University, and in 2021, he joined the Cognitive Sciences and Applications Team of the Combat Capabilities Development Command Soldier Center. He has contributed to the enterprise-level understanding of Army leadership and Army professional military education using Army-wide surveys. Currently, he is engaged in research examining the use of learning sciences best practices in military education and training; the efficacy of applying these best practices in classroom instruction and through distributed asynchronous training and education platforms; the characterization of soldier-relevant cognitive and physical traits; and the characterization of tactical performance during sustained live-fire exercises.

 
