Journal of Military Learning
 
 

Feedback from the Field for the Captains Career Course Common Core

Relevance to Outcomes-Based Military Education

Meredith Shafto

Army University


Abstract

The Captains Career Course Common Core (C5) has undergone a major modernization effort since 2022 (Fortuna, 2023). While ongoing evaluations have provided feedback on the course experience (Shafto & Lauer, 2023), there are currently no methods for reliably linking C5 evaluations with operational performance. Defining operationally relevant outcomes and demonstrating that they have been achieved is a requirement of outcomes-based military education (OBME), a key approach to modernizing professional military education (Chairman of the Joint Chiefs of Staff [CJCS], 2020; Vandergriff, 2010). The current article uses an OBME framework to identify requirements for effective C5 external evaluations. Information to guide the development of evaluations was gathered via discussions with quality assurance officers at Captains Career Course (CCC) schools and centers of excellence, who administer CCC external surveys. These discussions revealed diverse approaches to CCC external evaluations and identified challenges and best practices for developing effective C5 external evaluations that support OBME requirements. The themes emerging from the quality assurance officer discussions contribute to a broader conversation about how institutions across the learning enterprise can support the goals of professional military education by establishing reliable feedback between operational and educational environments.

 

The Captains Career Course Common Core (C5) has undergone a major modernization effort since 2022 (see Fortuna, 2023). While evaluations during the course have provided valuable feedback (Shafto & Lauer, 2023), optimizing this and other professional military education (PME) modernization efforts requires measuring the impact of modernization on operational performance through effective external evaluations.1 Shafto and Lauer (2023) report on the first year of these evaluations.

There are currently no methods for reliably measuring the impact of C5 instruction on operational performance after graduation. Quality assurance officers (QAO) across the schools and centers of excellence (COE) that teach the Captains Career Course (CCC) administer external evaluations of the CCC, but these evaluations are not targeted at the impact of common core instruction specifically. See TRADOC Pamphlet (TP) 350-70-14, Training and Educational Development in Support of the Institutional Domain, for an overview of external evaluations (U.S. Army Training and Doctrine Command [TRADOC], 2021).

This article considers the challenges to developing an effective C5 external evaluation that supports the aims of PME modernization. To gather insight on the specifics of these challenges and how they may be addressed, respondents from QAOs at CCC schools and COEs provided information on their external evaluation practices. The results provide a summary of key findings and how they can be leveraged for the development of effective external evaluations for the C5.

C5 Modernization and External Evaluations

The proponent of the C5 is the Instructional Design Division, Vice Provost of Academic Affairs, Army University. The Instructional Design Division develops centralized curricula and lesson plans for five modules that constitute the C5: the Army profession, mission command, operational processes, operations, and training. The aim of the common core instruction is to provide baseline knowledge on essential leadership, operations, and training management abilities regardless of each officer’s specialization.

The fiscal year 2023/2024 modernization of C5 was initiated in late 2020, with implementation in October 2022. Key changes included a novel blended design for active-duty instruction, including a new distributed learning prerequisite prior to the residential course. Additional details of this phase of C5 modernization are provided in Fortuna (2023). Evaluation of the new C5 instruction began in October 2022 and has included feedback from students and instructors across the CCC schools/COEs (Shafto & Lauer, 2023). The results of these evaluations were used to identify strengths and weaknesses of the distributed learning instruction and how the new distributed learning instruction affected residential common core instruction. However, these evaluations do not provide feedback from the field. Understanding the effectiveness of C5 modernization requires linking educational measures (such as feedback from students or performance measures such as exam results) with operational measures (such as feedback from graduates or professional performance measures) gathered after graduation.

An Outcomes-Based Approach Can Guide C5 External Evaluations

Establishing educational-operational links is necessary to align C5 modernization with the adoption of an outcomes-based approach (CJCS, 2020) to PME. Outcomes-based military education (OBME), and outcomes-based education more generally, advocates that education should be student-focused; this means shifting away from what needs to be taught and prioritizing what students need to learn. The “outcome” in OBME refers to a clear statement of what students should know and be able to do when finishing the course, and an OBME approach requires developing methods to justify and assess those outcomes (CJCS, 2020).

OBME is not an alternative to the widely implemented analyze, design, develop, implement, and evaluate (ADDIE) model of curriculum development. Rather, outcomes-based frameworks can be tested within the ADDIE process (Magallanes, 2019), and a targeted OBME approach is a means of supporting and optimizing ADDIE stages.

Adhering to OBME requires an approach that “focuses on outputs, emphasizing evidence collected from direct and indirect assessments of student performance both within and external to the learning environment” (CJCS, 2020, p. A-1). Most relevant for C5 evaluation, achieving OBME goals requires an evidence-based demonstration of real-world outcomes. That is, “the ultimate demonstration of PLO [program learning outcome] achievement … occurs post-graduation in follow-on professional work” (CJCS, 2020, p. A-2).

However, across the Army learning enterprise, there is no standard evidence-based approach to achieving the OBME goal of establishing predictable and operationally relevant external measures (e.g., Ellinger & Posard, 2023). Questions remain on how to define and evaluate post-course outcomes systematically (Ellinger & Posard, 2023) and how to link students’ educational achievements with their professional skills (Eldeen et al., 2018). Outside of the Army learning enterprise, predictive models are used to demonstrate comparable outcomes-based goals: with appropriately designed outcome measures, models can predict students’ final performance in a course (Brooks & Thompson, 2017) or predict postgraduate outcomes like employability (Othman et al., 2020); a simplified sketch of this kind of predictive linkage appears after the list below. For C5, to establish reliable relationships between educational and operational measures, an effective external evaluation must have several key characteristics:

  1. Representative. External evaluation measures must be systematically collected to create representative datasets. This is a challenge because graduates may be difficult to contact, or there may be no consistent opportunities after graduation to provide either evaluation feedback or performance measures.
  2. Linkable. Linking educational and operational measures requires that both types of measures are observable and measurable (Rao, 2020; Schreurs et al., 2020) and grounded in a shared set of principles. This can be a challenge if available operational measures do not reliably reflect PME outcomes. Additionally, establishing a common framework pre- and post-graduation can be difficult because schools and COEs teaching the CCC have a wide range of operational goals, and student career opportunities and responsibilities vary both before and after their course. Establishing predictive relationships must account for variable student and graduate experiences.
  3. Actionable. To provide actionable feedback as part of the ADDIE process, measures must be specific enough to support decision making, and a reliable data infrastructure must exist not only to collect external evaluations but to also feed this information back to relevant stakeholders.
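To make the “linkable” requirement concrete, the sketch below illustrates the kind of predictive linkage referenced above. It is a minimal example using synthetic data and hypothetical measure names (a distributed learning exam score, a practical exercise score, and a supervisor-reported preparedness rating); it does not describe an existing C5 dataset or analysis pipeline.

```python
# Illustrative sketch only: all measure names and data are hypothetical.
# It shows how educational measures could be linked to a later operational
# outcome with a simple predictive model, per the OBME discussion above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=0)
n_graduates = 400  # hypothetical number of survey respondents

# Educational measures gathered during the course (synthetic placeholders).
dl_exam = rng.normal(80, 10, n_graduates)   # distributed learning exam score
pe_score = rng.normal(75, 12, n_graduates)  # residential practical exercise score

# Operational measure gathered 6-12 months after graduation (synthetic):
# 1 = supervisor reports the graduate was adequately prepared, 0 = not.
signal = 0.03 * dl_exam + 0.04 * pe_score + rng.normal(0, 1, n_graduates)
prepared = (signal > signal.mean()).astype(int)

# Cross-validated accuracy indicates whether the educational measures carry
# predictive signal about the operational outcome.
X = np.column_stack([dl_exam, pe_score])
scores = cross_val_score(LogisticRegression(), X, prepared, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

In practice, any such model would depend on the representative, linkable, and actionable measures described above, and the modeling approach would be chosen to match the data actually available.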

QAO CCC External Evaluations Can Inform C5 Evaluations

The remainder of this article outlines an initial response to the challenges above, which involved gathering information from QAOs about current CCC external evaluations. While these external evaluations do not focus on common core topics, they are clearly relevant as they gather responses from CCC graduates and query CCC-relevant topics. These evaluations also have two other characteristics that will provide important lessons for developing C5 evaluations.

First, QAO external surveys are well-established and standardized. Surveys are sent out at each school and COE six to 12 months after students graduate from their courses. The requirements for external survey delivery are outlined in TP 11-21, Army Quality Assurance Program Procedures, and described in TP 350-70-14 (TRADOC, 2021, 2024b). TP 11-21 outlines requirements for collection and dissemination, including that institutions must “submit a quarterly summarized external survey data report to the HQ TRADOC QAO External Survey Program Manager, who prepares a summary of the aggregate results for the AQAP Director to brief TRADOC senior leaders” (TRADOC, 2024b, pp. 57–58). External surveys include three required questions. Graduates must be asked (1) if the training and education they received adequately prepared them to perform their jobs at their units, and (2) if they were trained and educated on the same equipment, or concepts, they use at their units; leaders must be asked (3) if the training or education that their personnel received adequately prepared them to perform their jobs at their units.

A second and complementary characteristic of these surveys is that they provide a useful range of different practices. While the use of required questions is a key benefit for standardizing quality control and accreditation efforts, QAO procedures also allow for variability in how individual institutions approach the external surveys. First, the required questions are a minimum, so that institutions can ask a wider range of questions. Second, institutions can “distribute their external survey reports to institutional stakeholders as required by local policy” (TRADOC, 2024b, p. 58), allowing the results of the survey to inform in-house processes at schools/COEs. Because the CCC is taught across a range of schools and COEs, a summary of QAO practices and procedures can provide information about different approaches to survey content, implementation, dissemination, and application.

QAO External Surveys: Feedback from the CCCs

The following section provides an overview of the discussion methods, the results of the discussions, and a summary of the feedback received.

Overview of Methods

Discussions were held with QAO representatives of Aviation Center of Excellence, Cyber Center of Excellence, Fires Center of Excellence, Intelligence Center of Excellence, Maneuver Center of Excellence, Maneuver Support Center of Excellence, Medical Center of Excellence, Mission Command Center of Excellence, and U.S. Army Institute for Religious Leadership. No school or individual will be attributed in describing feedback received.

Discussion topics were constructed to provide feedback on key topics from each representative. The three key topics were (1) the content, timing, and recipients of the external surveys; (2) how feedback from the surveys is used, including describing the relevant stakeholders; and (3) key challenges and desired improvements to the feedback process. The discussions were semistructured so that the conversations both covered key discussion topics and encouraged individualized input.

The primary focus of the questions was on the external survey procedure for CCC graduates, but because many QA officers are responsible for evaluating multiple courses, they often commented on a range of courses. Comments covering other courses are integrated here since the methodological lessons learned from a range of courses are likely to be relevant for developing C5 external evaluations. These discussions did not aim to evaluate the QAO external survey process but to use the range of experiences across the schools/COEs to provide insights for developing effective C5 external evaluations.

Results of QAO Discussions

This section summarizes the key themes that emerged from the discussions that can inform the development of C5 external evaluations.

Schools/COEs Take Different Approaches to External Evaluations. Respondents described a range of feedback approaches that extended beyond the required survey questions and the use of the survey format.

1. School-specific external survey content. While a few representatives indicated that only the three required QAO questions were administered in the external surveys, most indicated that they extended the questions on the external survey to include questions about tasks or skills that were specific to the school or COE’s course objectives.

2. Using external surveys for the ADDIE process. Only one representative indicated that they used the feedback exclusively for higher-level QAO purposes (sending a report to TRADOC QAO). Most respondents indicated wider use of feedback including sending results to local leaders, using results in postinstructional conferences and after action reviews (AARs), or providing findings to developers as input into the ADDIE course development process. The perceived usefulness of the external survey data for the ADDIE processes was mixed. While external survey data was always gathered and considered, the feedback that drove decisions sometimes came from other sources such as in-house surveys implemented by the course manager or director of training, or independent decisions from the commandant.

3. Alternative avenues of external feedback. Just as most respondents reported adding to the required QAO questions, most also reported other methods for gaining external feedback. For graduate feedback, some schools/COEs developed in-house surveys while others took advantage of AARs, critical task site selection boards, or job analyses as opportunities for getting feedback from the operational force. A minority of respondents described ongoing or planned initiatives that actively reach out to the operational environment, including gathering evaluations at umbrella weeks or sending representatives to combat training centers (CTC) to gather relevant feedback during and following training events. For leader feedback, a commonly reported tactic was to get leader feedback from those who have come for in-person PME such as precommand courses; a related approach was to seek informal discussions with senior leaders coming to invited events such as conferences.

Schools/COEs Face Challenges in Gathering Effective External Evaluations. As reported above, representatives across the schools/COEs noted limits to the usefulness of survey data alone, owing to a set of common challenges in acquiring effective external evaluations.

1. Representative feedback. The most mentioned challenge was low survey return rate. Return rates of less than 10% were commonly mentioned, with some lower than 2%. Respondents provided a range of suggestions for why response rates may be low, including survey fatigue (receiving so many survey requests that motivation to respond declines), limited time available to respondents to prioritize survey completion, and students being difficult to contact because they have not been issued a government email address, have multiple government email addresses, or work within a security environment where survey invitations are blocked. Resourcing was another challenge mentioned by several respondents. For example, it was not always possible to identify time or expertise for developing an in-house external survey. Similarly, one respondent mentioned that new or evolving PME requirements may add the need for new targeted evaluations but without those requirements being formally resourced. Ideas on how to improve the representativeness of data collection included considering mechanisms for reducing survey fatigue, exploring alternative survey implementation platforms that may reduce security interference, and finding ways to increase leadership involvement in the feedback process to make it a higher priority for graduates.

2. Linkable feedback. The second challenge to the utility of the external survey feedback was whether data provided feedback that could be linked to educational measures. Respondents questioned whether the “right” questions were always asked. For example, while the required QAO questions probe critical issues about course efficacy, they may be too general to provide feedback that course managers or curriculum developers can use to update course materials. One respondent noted it is critical that surveys are designed with improvement goals in mind, so it is clear how survey results do or do not provide evidence of improvements or declines in course qualities. A related challenge was that, when asking about specific skills and abilities learned in the course, evaluators must account for graduates having had highly variable experiences after leaving the course. Respondents suggested that in addition to asking graduates about their skill proficiency, it is important to probe whether the skill is or has been relevant for their duties. Likewise, a challenge in asking leaders for feedback is that evaluators do not always know if current leaders are commanding recent graduates or if they have the relevant expertise to evaluate graduates’ competency in specific skills and abilities.

In response to the limitations of using survey data alone, several respondents reported ideas for alternative sources or formats of feedback. These ideas were aimed at improving the usefulness of evaluation feedback as well as addressing the difficulties of data collection. First, respondents suggested methods for improving feedback from installations such as creating tiger teams or appointing responsible personnel at installations who could identify recent graduates and gather feedback; having someone in an installation who could track graduates would also aid in identifying relevant leaders at the same installation. Second, several respondents suggested the potential for gathering relevant feedback during training events at CTCs. A school/COE representative could ask targeted questions above and beyond the measures already recorded at the training event, which could provide targeted feedback that could be directly related to educational aims and objectives. Third, some respondents indicated it would be beneficial to have knowledge of and access to existing data sources. Existing or planned sources of data, such as data that may become available as part of the Integrated Personnel and Pay System-Army, could provide external feedback without necessitating additional data collection. Finally, one respondent suggested a novel means of obtaining operational “feedback” by increasing the proportion of military (versus civilian) instructors to bring recent operational experience back to the educational environment.

3. Actionable feedback. A third set of challenges highlighted the question of how and whether feedback could be actioned, including whether there is a well-established flow of response data to relevant stakeholders. This factor had variable impact on respondents, with some describing explicit infrastructure for feedback to both be reported (e.g., to course managers) and to be applied (e.g., during AARs); in contrast, some respondents expressed concerns that feedback may need to be “pushed” to relevant stakeholders and may or may not be used consistently.

Summary of Feedback

Discussions with QAO representatives across a range of schools and COEs revealed that, in addition to gathering feedback on the required external survey questions, institutions use a diverse range of approaches to acquire external feedback on how educational outcomes are realized in the operational environment. Many schools add targeted questions to the required questions to achieve more actionable feedback for curriculum improvement. In response to a core challenge of low response rates for graduates and their supervisors, institutions have turned to a convergent approach, utilizing several methods for obtaining feedback from the operational environment. As one respondent suggested, the external surveys serve as just one piece of a feedback puzzle.

While this summary is not an exhaustive survey of external feedback from either QAOs or other sources (such as in-house evaluations), the experience and expertise gathered from the participating representatives provide critical considerations for developing and implementing an external evaluation of the C5. These considerations are discussed in the next section.

Developing C5 External Evaluations: Lessons Learned and Recommendations

The main goal of the C5 external evaluation is to establish predictive links between PME and the operational environment. This requires external measures that are quantitative, can be gathered systematically so they are representative, and can be demonstrated to link meaningfully to specific PME goals. Based on discussions with QAO representatives, a successful C5 external evaluation should address key challenges.

1. Improve representativeness by addressing low response rate. Compared to school-specific evaluations, the C5 evaluation can partially mitigate concerns about poor return rates because the common core is taught enterprise-wide, with more than 8,000 graduates per year. Even a return rate of 3%–5% would provide roughly 240–400 respondents. However, subsetting the data to examine the variability in responses across schools/COEs, components (active duty, Army Reserve, and National Guard), or specific classes would reduce sample size accordingly. Thus, plans for a C5 external survey should consider suggestions from the QAO respondents to address apathy and survey fatigue, including making surveys short and convenient to take.
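As a rough illustration of the sample-size arithmetic above, the following sketch assumes a hypothetical pool of 8,000 graduates per year and, purely for simplicity, a notional number of schools/COEs and evenly sized subgroups; the subgroup counts are assumptions for illustration, not actual enrollment figures.

```python
# Rough illustration of how return rate and subsetting interact.
# The 8,000-graduate pool and the subgroup counts are assumptions for
# illustration only; real schools/COEs and components vary in size.
graduates_per_year = 8_000
n_schools = 15     # hypothetical number of schools/COEs teaching the CCC
n_components = 3   # active duty, Army Reserve, National Guard

for return_rate in (0.03, 0.05, 0.10):
    respondents = graduates_per_year * return_rate
    print(f"Return rate {return_rate:.0%}: {respondents:.0f} respondents total, "
          f"~{respondents / n_schools:.0f} per school/COE, "
          f"~{respondents / n_components:.0f} per component")
```

At a 3% return rate, for example, the roughly 240 total respondents thin to only a handful per school/COE, which limits the comparisons the data can support.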

2. Make data linkable by considering performance measures. Many respondents reported obtaining graduate performance measures, such as those available from the Center for Army Lessons Learned following training events at CTCs. While respondents indicated that these additional measures were sought to compensate for low survey return rates, operational performance data could be more informative than survey feedback if it could be directly linked to performance measures from PME. However, QAO respondents highlighted challenges in using performance measures, reporting that feedback from training events may be too general and that schools/COEs rarely have representatives there to ask targeted questions.

Using operational performance measures for the C5 evaluation presents a data collection challenge. Just as some schools used existing measures from CTC training events, one possibility is to identify and evaluate existing professional products that reflect the performance of C5 skills, such as writing samples that can be evaluated with reference to CCC communication instruction. If extant products are not available, an alternative is to develop new C5-related performance measures and administer them to graduates. As highlighted by QAO respondents, performance measures have the potential to provide more direct and relevant feedback than survey evaluations but come with significant challenges in data access (for existing measures) or collection (for novel measures).

In response to the challenges of data collection, many QAO representatives reported using a convergent approach to external feedback, supplementing survey data with feedback from other sources, such as leaders who are participating in educational programs or graduates completing CTC events. This convergent feedback helps overcome the low return rates from external surveys and difficulties of acquiring performance data, as well as providing a range of data types for consideration. Taking a convergent approach could provide benefits to a C5 external evaluation by diversifying the available types of data. However, there are also disadvantages to a convergent approach: first, accessing and analyzing multiple data sources increases the required resources, and second, using several smaller diverse datasets will make it difficult to establish quantifiable links between educational and operational measures.
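One way to picture why several small, diverse datasets are hard to link quantitatively is as a record-linkage problem. The sketch below is a minimal illustration, assuming hypothetical field names and a shared graduate identifier that real data sources may not have in common.

```python
# Minimal illustration of merging two hypothetical feedback sources on a
# shared graduate identifier. Field names and values are invented; real
# convergent sources often lack a common key, which limits quantitative links.
import pandas as pd

# Hypothetical external survey responses (one row per responding graduate).
survey = pd.DataFrame({
    "graduate_id": [101, 102, 105],
    "prepared_for_duty": [4, 5, 3],  # 1-5 self-rating
})

# Hypothetical performance observations gathered at a training event.
ctc_observations = pd.DataFrame({
    "graduate_id": [102, 103, 105],
    "planning_score": [78, 85, 62],
})

# An inner join keeps only graduates present in both sources, so the
# linkable sample is smaller than either source alone.
linked = survey.merge(ctc_observations, on="graduate_id", how="inner")
print(linked)
print(f"Linkable records: {len(linked)} of {len(survey)} survey responses")
```

The smaller the overlap between sources, the fewer records can be linked, which is one reason a convergent approach trades breadth of feedback for weaker quantitative links between educational and operational measures.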

3. Develop actionable questions by considering stakeholders. Respondents emphasized the challenge of making questions actionable and of ensuring that there is a pathway for data to flow from the survey back to key stakeholders. C5 instruction covers general, doctrinally based topics and is taught in a variety of contexts at the schools and COEs. It may therefore be difficult to develop feedback questions that are concrete enough to be used in curriculum development but general enough to be asked of graduates across the schools/COEs. This challenge may mean that an effective C5 external evaluation will require an iterative process to develop measures that can both identify general targets for improvement and account for the range of postgraduate experience, such as whether graduates have had opportunities to apply what they learned.

Relevance of C5 External Evaluations for Other Army-Wide Initiatives

Establishing links between C5 educational and operational measures supports the goals of the OBME approach across the Army learning enterprise, and effectively evaluating C5 modernization can support the evaluation of other modernization efforts. Moreover, developing direct educational-operational links contributes to the establishment of a learning ecosystem, a continuum of diverse, flexible, and lifelong learning. The learning ecosystem is a key component of the vision of the future of Army education laid out in The Army Learning Concept for 2030–2040 (TRADOC, 2024a; Walcutt & Schatz, 2019).

Directly measuring the impact of PME on operational success is critical for demonstrating that PME is achieving its purpose. However, as we see from the example of C5, developing effective external evaluations is challenged by the lack of both a data infrastructure and a participation culture to ensure representative feedback. It is beyond the scope of this article to provide specific recommendations for these broad issues, but the input received suggests three general considerations:

  1. Action is needed to reduce survey fatigue and other barriers to providing feedback. Because the survey burden builds cumulatively, efforts need to be centralized to consider the total feedback requirement on students and graduates, while still allowing individual institutions to gather targeted feedback flexibly.
  2. A culture of participation needs to be encouraged, for example by communicating to leaders, students, and graduates how their feedback is used to improve curriculum and how improving PME will benefit them.
  3. Getting useful feedback requires resources. While time and money are at a premium, the inefficiencies inherent in collecting imprecise or unusable data must be considered. Moreover, investing in data collection that provides usable feedback may save resources downstream by optimizing the outcomes of PME.

Summary and Conclusions

There are significant challenges to establishing reliable predictive relationships between PME outcomes and operational performance. However, understanding how PME impacts readiness is not only important for C5 modernization. This issue sits at the center of Army-wide initiatives to institute OBME across PME, increase data-centric approaches to curriculum development, and establish a learning ecosystem that supports a continuum of career-long learning.

This article represents a small corner of these broader issues, gathering lessons from QAO efforts across the CCC schools/COEs that can be used to design effective C5 external feedback. The practices at the different schools and COEs provide invaluable insight into both potential approaches and the pitfalls and challenges of gathering external feedback.

The conceptual links between this effort and broader Army-wide initiatives highlight the need for this bottom-up effort to be met with top-down leadership involvement. The ability to reliably acquire external feedback requires developing a culture where participants understand how improving PME provides Army-wide benefits, and a data collection infrastructure that can support the data-driven goals for Army modernization.

 

The views expressed in this article are those of the author and do not reflect the official policy or position of the U.S. government, the Department of Defense, the U.S. Army, or Army University.


References

Brooks, C., & Thompson, C. (2017). Predictive modelling in teaching and learning. In C. Lang, G. Siemens, A. Wise, & D. Gasevic (Eds.), Handbook of learning analytics (pp. 61–68). Society for Learning Analytics and Research.

Chairman of the Joint Chiefs of Staff. (2020). Outcomes-based military education procedures for officer professional military education (CJCS Manual 1810.01). https://www.jcs.mil/Portals/36/Documents/Library/Manuals/CJCSM%201810.01.pdf

Eldeen, A. I. G., Abumalloh, R. A., George, R. P., & Aldossary, D. A. (2018). Evaluation of graduate students’ employability from employer perspective: Review of the literature [Special issue]. International Journal of Engineering & Technology, 7(2.29), 961–966. https://doi.org/10.14419/ijet.v7i2.29.14291

Ellinger, E., & Posard, M. (2023). Imagining the future of professional military education in the United States: Results from a virtual workshop. RAND. https://www.rand.org/pubs/conf_proceedings/CFA2148-1.html

Fortuna, E. J. (2023). Modernizing the U.S. Army’s captains career course. Journal of Military Learning, 7(1), 16–26. https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/Conference-Edition-2023-Journal-of-Military-Learning/Captains-Career-Course/

Magallanes, E. (2019). Construction and validation of outcomes-based work text in calculus. WVSU Research Journal, 8(1), 1–11. https://journal.wvsu.edu.ph/index.php/journals/article/view/15/17

Othman, W. N. A. W., Abdullah, A., & Romli, A. (2020). Predicting graduate employability based on program learning outcomes. IOP Conference Series: Materials Science and Engineering, 769(1), Article 012018. https://doi.org/10.1088/1757-899X/769/1/012018

Rao, N. J. (2020). Outcome-based education: An outline. Higher Education for the Future, 7(1), 5–21. https://doi.org/10.1177/2347631119886418

Schreurs, S., Cleutjens, K. B., & Cleland, J. (2020). Outcomes-based selection into medical school: Predicting excellence in multiple competencies during the clinical years. Academic Medicine, 95(9), 1411–1420. https://doi.org/10.1097/ACM.0000000000003279

Shafto, M., & Lauer, S. (2023). Evaluation of the captains career course common core modernization quarterly report FY23 Q1–FY24 Q1 (October 2022–December 2023). Army University.

U.S. Army Training and Doctrine Command. (2014). TRADOC implementation of the Army quality assurance program (TRADOC Regulation 11-21). U.S. Government Publishing Office. https://adminpubs.tradoc.army.mil/regulations/TR11-21.pdf

U.S. Army Training and Doctrine Command. (2021). Training and educational development in support of the institutional domain (TRADOC Pamphlet 350-70-14). U.S. Government Publishing Office. https://adminpubs.tradoc.army.mil/pamphlets/TP350-70-14.pdf

U.S. Army Training and Doctrine Command. (2024a). The Army learning concept for 2030–2040 (TRADOC Pamphlet 525-8-2). U.S. Government Publishing Office. https://adminpubs.tradoc.army.mil/pamphlets/TP525-8-2.pdf

U.S. Army Training and Doctrine Command. (2024b). Army quality assurance program procedures (TRADOC Pamphlet 11-21). U.S. Government Publishing Office. https://adminpubs.tradoc.army.mil/pamphlets/TP11-21.pdf

Vandergriff, D. E. (2010). Today’s training and education (development) revolution: The future is now! (Land Warfare Paper No. 76). Association of the United States Army, Institute of Land Warfare. https://www.ausa.org/sites/default/files/LWP-76-Todays-Training-and-Education-Development-Revolution-The-Future-is-Now.pdf

Walcutt, J. J., & Schatz, S. (Eds.). (2019). Modernizing learning: Building the future learning ecosystem. U.S. Government Publishing Office. https://adlnet.gov/publications/2019/04/modernizing-learning/

 

Note

1 This article uses the term “evaluation” per TRADOC Regulation 11-21 (2014), “A systematic, continuous process to appraise the quality (or determine the deficiency), efficiency and effectiveness of a program, process or product. It provides the mechanism for decision makers to assure quality” (p. 15). Typical evaluations for educational courses and programs include using surveys or similar formats to garner feedback from key stakeholders including current students, graduates, leaders, or instructors.

 

Dr. Meredith Shafto is a research psychologist at the Institutional Research and Assessment Division, Vice Provost of Academic Affairs, Army University. She has a PhD in cognitive psychology and uses evidence-based approaches to improve educational practices across the learning enterprise through a range of collaborative projects.


February 2025