A Choice to Lead
Generative AI in Army PME
Lt. Col. Peyton Hurley, U.S. Army
The release of ChatGPT in November 2022 thrust generative AI (GenAI) into the hands of the everyday user by transforming a technical capability into a public force. For the military, it marked a point of no return: AI would now shape the professional development of its leaders, whether they were ready or not. While algorithms had long influenced battlefield systems, GenAI made those capabilities visible and accessible to nonexperts. Now, anyone could generate essays, synthesize research, or simulate adversaries in seconds. As Dr. James Lacey, a professor at Marine Corps University, notes, military education institutions are still catching up, implementing inconsistent policies that leave students navigating ambiguous, sometimes contradictory rules around AI use.1
Thomas Dohmke, CEO of GitHub, captured the urgency: organizations must decide “which side of the productivity polarity” they want to be on.2 In fields from finance to software development, GenAI is already automating tasks and reshaping knowledge work. The Army must evolve too, but its adoption remains uneven and scattered; operational units are experimenting and strategic leaders are drafting policy. One critical domain, however, is being underutilized: professional military education (PME).
PME blends doctrine and debate, planning and pedagogy, simulation and scholarship. Most importantly, it operates in a largely unclassified environment, making it an ideal venue for GenAI experimentation. At institutions like the Command and General Staff College (CGSC), students already perform tasks—research, writing, war-gaming—that map directly to GenAI’s capabilities. Yet as a RAND study on PME observes, PME often lags the operational force it supports.3
This article argues that PME, especially Army University, should lead the U.S. Army’s adoption of GenAI by building student literacy. This is not just about improving education but changing how the Army thinks, plans, and leads in an information age shaped as much by the systems that interpret and analyze data as by the data itself. Caution has served its purpose. Leadership must now come from within the educational enterprise.
Across the civilian world, GenAI is already transforming how knowledge work gets done. Companies that hesitated during past technological shifts (e.g., Blockbuster with streaming, Kodak with digital photography, BlackBerry with smartphones) were quickly overtaken. The risks for militaries are greater. France built its defenses around the Maginot Line in 1940, preparing for the last war while the next arrived by maneuver and air. The Army cannot afford to treat GenAI as a curiosity or wait for perfect policy. The time to adapt is before the next fight begins.
The GenAI Landscape: Opportunity, Uncertainty, and Urgency
GenAI refers to a class of models capable of producing novel outputs (e.g., text, images, code) based on large datasets and probabilistic reasoning. Unlike earlier AI, which focused on classification and prediction, GenAI creates. These tools can draft analysis, summarize documents, simulate adversaries, and even brainstorm plans. But while their outputs appear fluent, they lack understanding. GenAI is not intelligent; it is predictive. It excels at mimicking thought but not reasoning. That duality makes GenAI both powerful and risky. Models like GPT-5 can synthesize doctrine or generate war-game scenarios in seconds, but they can also hallucinate facts, embed bias, or reinforce flawed assumptions. These tensions are well-documented in civilian sectors, where GenAI has prompted new debates about authorship, trust, and professional standards. Education researchers describe GenAI as both “friend and foe,” democratizing access to knowledge while potentially undermining critical thinking.4
In the Department of Defense, these tensions are increasingly operational. Launched in 2023, Task Force Lima was charged with assessing GenAI’s utility and risk.5 The Army and other services first deployed internal models like CamoGPT and NIPRGPT, and commercial tools like AskSage, all designed for secure use but built on limited or earlier-generation architectures. By contrast, commercial systems are evolving rapidly, with superior capabilities in planning, analysis, and simulation. This gap matters. More recently, the Department of Defense (the Department of War since 2025) deployed GenAI.mil, a promising step that combines the latest model from Google with the enhanced security of an impact level 5 (IL-5) environment. GenAI.mil will expand to include the latest from Grok, Claude, and OpenAI’s GPT-5. To build real proficiency, PME students need access to the models they are most likely to encounter in the broader information environment and in future operational contexts.
What they need, above all, is GenAI literacy, not just familiarity with tools but the judgment to use them well. Researchers Ravinithesh Annapureddy et al. define twelve competencies that form the foundation of GenAI literacy, ranging from prompt design and output assessment to ethical reasoning and bias detection.6 The University of Florida’s “AI Across the Curriculum” initiative offers a broader, five-part model that emphasizes adaptability and ethical awareness.7 Both frameworks highlight the need to prepare users to shape, interrogate, and supervise GenAI, not just consume its outputs.
These capabilities map closely to PME’s intellectual mission. Staff officers must synthesize information, evaluate options, and communicate decisions, the kinds of tasks GenAI can accelerate or distort. Literacy, not automation, is the goal. PME is where officers must learn how to think with GenAI, not through it.
Why PME Is the Ideal Venue
PME is uniquely positioned to lead GenAI integration. Unlike operational units, which must prioritize readiness and security, PME operates in a largely unclassified environment centered on intellectual development. It blends doctrine and debate, writing and planning, and simulation and scholarship. This makes it an ideal space to explore GenAI in a structured, low-risk setting.
At institutions like CGSC, officers already perform tasks (e.g., research, planning, analysis, war-gaming) that align with GenAI’s strengths. These activities offer natural opportunities to explore GenAI tools in support of decision-making and synthesis. However, GenAI is not about doing less work; it is about doing more with speed, scale, and precision. In future operational environments, fluency with AI tools will not be an advantage but a baseline expectation. Since much of the material in PME is open-source, experimentation avoids the classification and operational security concerns that constrain many other Army organizations.
Experimentation is only meaningful if students can engage with the tools shaping the information environment they operate in. Today’s most capable GenAI models, developed by commercial industry, offer reasoning, adaptability, and planning support well beyond the scope of older or simplified systems. These tools are influencing workflows in policy, intelligence, and global decision-making contexts. Students working only with legacy models, or models deployed on insufficient compute infrastructure, may meet security requirements, but they will leave PME unfamiliar with the GenAI ecosystems shaping operational, policy, and intelligence environments. PME should not just simulate future tools; it should teach students how to navigate them.
This is not a call for unbounded access. This is a call for PME leadership to pursue updated policy and appropriate tools such as GenAI.mil that enable the responsible use of cutting-edge, commercial-grade models for education and planning. The Army cannot afford to shield its future leaders from the very technologies transforming staff work, operational design, and strategic communication. Security policy should protect but not prevent progress.
PME also offers a wide variety of media and learning formats like essays, briefs, debates, decision exercises, and group war games, which can all be augmented by GenAI. These formats allow students to engage with AI not just as users but as analysts and skeptics. Instructors can assign GenAI-supported tasks that require students to critique outputs, compare versions, or refine machine-generated drafts.
Education scholars argue that this collaborative model, in which AI functions as a codesigner of thinking, is where GenAI has the most value.8 Caitlin Bentley et al. advocate for responsible, institutionally governed GenAI education focused on output supervision and ethical use.9 The University of Florida’s framework similarly emphasizes “enabling AI,” treating AI tools as part of the learning process rather than as threats to it.10 These models support PME’s mission while preparing officers for AI-augmented staff work.
Finally, PME is intellectually postured to explore GenAI’s deeper implications. Some scholars describe GenAI as part of a “hypercommons,” a domain where authorship, originality, and epistemological authority become uncertain.11 These are not abstract issues. They are central to how military professionals frame narratives, assess sources, and lead in contested information environments. PME is where officers can safely interrogate these questions through red teaming, simulations, and structured critique.
GenAI is not just a tool; it is a shift in how knowledge is generated, evaluated, and acted upon. PME offers the Army its best chance to lead that shift—deliberately, critically, and with discipline.
Typology of Army GenAI Use Cases
To integrate GenAI responsibly into PME, faculty and students need more than general awareness. They need a structured understanding of where and how GenAI is most likely to appear in military workflows. What kinds of tasks can GenAI support? Which functions align with officer responsibilities at the field-grade level? Where are the risks? The potential gains?
To answer these questions, I studied and developed a tailored typology of GenAI use cases specific to Army contexts (see table 1). While commercial firms like Deloitte have offered extensive GenAI portfolios across industries (e.g., finance, healthcare, human resources), few reflect the realities of military staff work.12 Similarly, Department of Defense initiatives like Task Force Lima tend to focus on acquisition and policy rather than day-to-day planning and analysis.13 A typology built for PME must reflect doctrinal tasks, staff processes, and educational priorities.
Table 1 presents twelve functional categories of GenAI use, ranging from summarization and planning to adversary simulation and influence modeling.14 Each use case is described by purpose and mapped against a “likelihood of use” score (1–4) that estimates how frequently Command and General Staff Officer Course (CGSOC) graduates might encounter or benefit from that function in their next assignment.
This typology offers two immediate benefits to PME. First, it helps instructors and curriculum designers target the most relevant GenAI functions by focusing effort on areas where literacy matters most. Planning assistance, doctrinal summarization, administrative support, and scenario development, already core components of staff education, all rank highly. GenAI can support them without fundamentally altering instructional goals.
Second, the typology highlights how GenAI enables functions that exceed the current curriculum, such as red-teaming adversary narratives or generating synthetic media. These applications open new opportunities for PME innovation but require careful ethical and security guardrails. They may not be common today, but they reflect a rapidly approaching future.
Not all use cases are equally relevant to all officers. Technical functions like fine-tuning models or writing code are specialized. The overall pattern is clear: GenAI maps onto core workflows in staff planning, information synthesis, and communication. PME does not need to guess where GenAI might apply. The patterns are already visible.
GenAI Literacy Requirements and Competency Gaps
Understanding where GenAI applies is only half the equation. Equally important is knowing what users need to know. As GenAI tools become more accessible, the real differentiator is not access; it is literacy. Officers must be able to interact with models intelligently, critique outputs, identify limitations, and apply results within doctrinal and ethical boundaries. This requires a defined set of cognitive and procedural competencies, skills that can be taught, practiced, and assessed in PME.
To guide this effort, my research adapted Annapureddy et al.’s GenAI literacy framework, which identifies twelve core competencies required to use GenAI tools responsibly and effectively.15 These competencies range from technical skills like prompt engineering and output assessment to conceptual abilities like ethical reasoning, bias detection, and knowledge of GenAI’s limitations. Each competency was evaluated against the GenAI use cases outlined previously to produce a prioritized view of which competencies are most relevant to Army officers at the CGSOC level.
Table 2 reveals three clusters of relevance:
- Baseline competencies. These are essential for all users. They include understanding how GenAI works, its capacity and limits, and its ethical implications. These competencies form the foundation of GenAI literacy and scored highest across almost all use-case categories.
- Practical use competencies. Skills such as prompt engineering and output evaluation are crucial for officers who will regularly engage with GenAI in planning, analysis, or communication tasks. These are teachable in PME and align well with current classroom activities.
- Specialized or low-relevance competencies. Skills like programming models or fine-tuning architectures are important but less applicable for most field-grade officers. These are more relevant for technical military occupational specialties or roles within AI development pathways.16
This prioritization supports targeted curriculum development. PME does not need to cover all twelve competencies equally. It should focus on those that prepare students to use GenAI as a judgment aid and not a decision substitute. The goal is not to turn officers into machine-learning experts but to equip them to supervise, shape, and critique the outputs they will increasingly rely on.
GenAI is no longer optional in the information environment, but effective use is not intuitive. Literacy (structured, deliberate, and mission-aligned) is what makes GenAI operationally relevant. And PME is where that literacy should begin.
Curriculum Integration Feasibility in CGSOC
If GenAI use cases are clearly identifiable and the necessary competencies can be defined, the next question is whether PME can integrate GenAI instruction without disrupting the core mission. At CGSOC, the answer is yes. GenAI integration is not only feasible but also strategically aligned with the course’s existing educational practices and learning objectives.
My research evaluated CGSOC’s curriculum through three lenses: instructional activities, terminal learning objectives, and suitability of GenAI for educational enrichment. Across all three, the results suggest GenAI can be incorporated in targeted, low-disruption ways that enhance rather than replace core tasks.17
Many CGSOC assignments, such as producing estimates, analyzing historical campaigns, drafting orders, or developing briefings, map directly to high-probability GenAI use cases like doctrinal summarization, planning support, and information synthesis. Instructors can use GenAI to augment these tasks without compromising educational outcomes. For instance, students might compare a GenAI-generated enemy course of action to their own, critique AI-generated campaign analyses, or iterate written products with AI feedback, all while demonstrating command of the underlying material.
Several terminal learning objectives are especially well suited for integration. Learning objectives related to critical thinking, decision-making, and warfare in the strategic environment are particularly ripe for AI support. These objectives require cognitive flexibility, pattern recognition, and conceptual synthesis, the very areas where GenAI can offer support without automating judgment. Machines, including AI, partner with humans; they do not replace them. Instructors can design assignments where GenAI is used as a thought partner, not a crutch.
My research also presented a challenge–opportunity–threat framework for key CGSOC learning objectives. The takeaway is clear: when GenAI is introduced in ways that preserve decision ownership, encourage skepticism, and require output validation, it should enhance the learning experience. When it is used in ways that shortcut thinking or conceal knowledge gaps, it undermines it. Instructional design is the key variable, not the tool itself.
This insight aligns with broader research. Education scholars like Thomas Chiu argue that GenAI is most effective when integrated across four domains: learning, teaching, assessment, and administration.18 In practice, this means GenAI can support both student-facing tasks (e.g., writing, research, simulation) and instructor tasks (e.g., rubric design, scenario scaffolding). Similarly, You Jeen Ha et al. recommend embedding GenAI into feedback loops—using it to provide second-draft critique or generate alternative frames that challenge student assumptions.19
At CGSOC, GenAI can be introduced incrementally, beginning with faculty-led demonstrations, guided comparison exercises, and optional enhancements to planning products. Over time, these practices can evolve into more formalized modules, ensuring students graduate not just having used GenAI but having reflected on what it means to use it responsibly. The School of Advanced Military Studies is already doing this for academic year 2025–26.20 Their implementation includes formal instruction, practical application, and continued use throughout the academic year.
The infrastructure is already in place. Intellectual habits such as critical inquiry, structured debate, and doctrinal analysis are already being cultivated. GenAI is not a replacement for these habits but a tool to sharpen them.
What Leadership Looks Like: The Case for Army University
If PME is the right venue for GenAI experimentation, then Army University is the right institution to lead it. The university sits at the nexus of doctrine, curriculum, faculty development, and talent management. It possesses the authority and structure to shape how GenAI is introduced across the Army’s educational enterprise, from CGSOC to the NCO Leadership Center of Excellence.
Currently, GenAI adoption across the Army is fragmented, with tactical units experimenting and some PME faculty testing GenAI in lesson design or research. But in the absence of clear guidance or educational doctrine, usage remains informal, inconsistent, and often hidden. Army University can change that by moving PME from isolated exploration to structured innovation.
One recent initiative illustrates both the progress and its limits. Brig. Gen. Jason Rosentrauch, deputy commandant of CGSC, described the Army’s development of an AI-powered tutor built on CamoGPT in a recent article.21 The system reflects meaningful institutional curiosity and demonstrates that Army University is willing to explore GenAI. However, experimentation is not enough; the tutoring system remains limited in scope, deployment, and ambition. It does not teach students how GenAI works, how to evaluate its outputs, or how to apply it in operational settings; rather, it prioritizes delivery over literacy. More importantly, there is little indication that the Army has the computing infrastructure to deploy such tools at scale across PME. A single AI assistant is not a strategy. If Army University intends to lead, it must institutionalize GenAI literacy, not just pilot GenAI tools.
To do so, Army University should focus on three core functions:
- Build smart modules. Create plug-and-play GenAI lessons that fit into existing PME courses. Think guided prompts, AI versus human planning drills, and red-team exercises using synthetic content.
- Train the trainers. Give instructors what they need: policy that allows for innovation, hands-on practice, and examples of GenAI done right. Help them lead from the front, not lag behind their students.
- Unlock the right tools. Build a policy framework that lets PME access top-tier commercial models. No one learns to fight using a simulator from the last war. GPT-5, Claude 4.5, and Gemini 3 Flash reflect the tools shaping real-world planning and information operations. Training only on stripped-down models builds blind spots, not judgment. The state of the art evolves weekly, and PME needs the freedom to keep up. Waiting for policy to catch up is not a leadership position; asserting the right to lead within it is.
Each of these efforts reinforces Army University’s broader mission: to prepare the force intellectually for operational complexity. RAND has noted that PME is often slow to innovate, even when individual educators are willing to experiment.22 Army University can close that gap by creating an institutional structure that enables bottom-up initiative to be scaled, evaluated, and shared.
This leadership role is not just bureaucratic; it is doctrinal. As new forms of AI-enabled staff work, planning, and influence emerge, the Army must develop a shared professional ethic around their use. What counts as original thought? How do officers attribute or critique machine-generated products? These are not just technical questions. They are questions of military professionalism. With its reach across PME, Army University is best positioned to lead that conversation.
Moreover, GenAI is not limited to student tasks. It can enhance faculty workflows as well: generating case studies, modifying lesson plans, or creating role-based simulations for design exercises. The same prompt strategies that improve student learning can help faculty teach more efficiently and creatively. Leadership means modeling the behaviors we ask students to adopt.
In short, Army University has the reach, structure, and credibility to act. Leadership is not declared but demonstrated. GenAI is moving fast, as are civilian institutions. If Army University does not lead, the force will be shaped by fragmented initiatives, uneven standards, and digital divides. This is the moment to lead, and the best-positioned institution to do so already exists.
Addressing Ethical and Security Concerns
Every conversation about GenAI in military education must address ethics and security. Critics rightly worry about plagiarism, data leakage, model bias, and overreliance. But avoiding these challenges altogether would be a mistake, and more generative AI literacy—not less—is the solution. PME, more than any other Army environment, is where these concerns should be addressed directly.
Academic integrity is often the first worry. If students use GenAI to write essays or briefings, how can instructors assess original thought? This risk is not unique to GenAI; it is part of a broader question of authorship in the digital age. Tools already exist to guide ethical use, from citation conventions to task design. The key is to shift from evaluating outputs alone to assessing process: how students refine, critique, and build on GenAI responses.
Ethical complexity goes further. The “hypercommons,” where authorship, originality, and authority blur, raises real questions for military education.23 When is it acceptable to use machine-generated plans or narratives? What constitutes informed judgment in an age of predictive text?
Model bias is another concern. Commercial GenAI systems have been shown to generate racially or culturally biased outputs.24 In the military, this could have reputational or operational consequences—particularly in influence operations or civil-military relations. PME can confront this risk head-on through red teaming, bias-detection exercises, and structured critique, all components of GenAI literacy.
Security concerns are valid, especially regarding prompt data retention and unintentional exposure of sensitive information. Most PME content is drawn from unclassified, open-source material, and with appropriate guidance and platform controls, students can explore GenAI without violating operational security. Tools such as GenAI.mil operate in IL-5 environments, mitigating much of the security risk.
The right response is not prohibition but education. PME can model ethical GenAI use, foster critical literacy, and train officers to supervise AI-generated content with skepticism and care. There is no better environment in the Army to develop these habits and no better time to begin.
The Risk of Inaction: A Leadership Vacuum
While the case for GenAI integration in PME is strong, the consequences of inaction are even more compelling. GenAI is already being used across the force by staff officers drafting briefs, instructors refining lesson plans, and students completing assignments. Much of this use is ad hoc, informal, and unregulated. Without guidance, norms will form through habit and improvisation, not professional standards.
This vacuum is not sustainable. Waiting for perfect policy or enterprise-wide integration means forfeiting the chance to shape how GenAI is used responsibly. RAND has warned that innovation in PME often stalls not for lack of talent or opportunity but due to institutional inertia.25 If Army University does not lead, others will fill the gap, often without the same alignment to Army values or educational rigor.
As recent CGSOC students Majs. Patrick Kelly and Hannah Smith argue, “Educators in PME should embrace opportunities to lead in this space.”26 Doing so will ensure that GenAI use and literacy development align with Army values. That leadership must come from within.
These tools are already in use. The risk is not in trying them but in failing to train officers to use them well. PME’s challenge is not GenAI’s arrival; it is whether we choose to engage it with discipline or let it evolve by default.
Lead Now, or Follow Later
Generative AI is no longer speculative. It is reshaping how information is produced, how decisions are supported, and how organizations learn. For the Army, the challenge is not whether GenAI will become part of its intellectual ecosystem but whether the institution will shape that integration deliberately or let it unfold unevenly, without deep intellectual foundations. This is not just about tools; it is about risk. If PME does not lead, the Army risks ceding the shape of GenAI integration to improvisation, fragmentation, and external forces. Strategic adaptation begins with educational initiative.
PME is the Army’s best venue to lead this transition. It offers a low-risk environment, a mission aligned with critical thinking, and a structure capable of shaping officer judgment at scale. However, leadership requires more than permission; it requires initiative. Army University must pursue the authorities, tools, and curriculum changes that will allow it to train the force with state-of-the-art capabilities, not legacy constraints.
Waiting for policy to catch up is not leadership. Creating the conditions to move responsibly within policy is. The tools exist, the opportunity is clear, and the venue is ready. The Army needs leadership, and Army University is best positioned to provide it. The question is not whether to lead but whether to lead now.
The author used generative AI to help outline, draft, revise, and edit this article.
Notes 
- James Lacey, “Peering into the Future of Artificial Intelligence in the Military Classroom,” War on the Rocks, 3 April 2025, https://warontherocks.com/2025/04/peering-into-the-future-of-artificial-intelligence-in-the-military-classroom/.
- Will Knight, “AI Is Rewiring Coders’ Brains. Yours May Be Next,” Wired, 8 February 2024, https://www.wired.com/story/fast-forward-ai-rewiring-coders-brains-github-copilot/.
- Charles A. Goldman et al., Intellectual Firepower: A Review of Professional Military Education in the U.S. Department of Defense (RAND Corporation, January 2024), https://www.rand.org/content/dam/rand/pubs/research_reports/RRA1600/RRA1694-1/RAND_RRA1694-1.pdf.
- Enkeledja Kasneci et al., “GPT for Good? On Opportunities and Challenges of Large Language Models for Education,” Learning and Individual Differences 103 (April 2023): 2, https://doi.org/10.1016/j.lindif.2023.102274; Weng Marc Lim et al., “Generative AI and the Future of Education: Ragnarök or Reformation? A Paradoxical Perspective from Management Educators,” International Journal of Management Education 21, no. 2 (July 2023): 3, https://doi.org/10.1016/j.ijme.2023.100790.
- Deputy Secretary of Defense, memorandum, “Establishment of Chief Digital and Artificial Intelligence Officer Generative Artificial Intelligence and Large Language Models Task Force, Task Force Lima,” U.S. Department of Defense, 10 August 2023, https://media.defense.gov/2023/Aug/10/2003279040/-1/-1/1/ESTABLISHMENT_OF_CDAO_GENERATIVE_AI_AND_LARGE_LANGUAGE_MODELS_TASK_FORCE_TASK_FORCE_LIMA_OSD006491-23_RES_FINAL.PDF.
- Ravinithesh Annapureddy et al., “Generative AI Literacy: Twelve Defining Competences,” Digital Government: Research and Practice 6, no. 1 (February 2025): 6–7, https://doi.org/10.1145/3685680.
- Jane Southworth et al., “Developing a Model for AI Across the Curriculum: Transforming the Higher Education Landscape via Innovation in AI Literacy,” Computers and Education: Artificial Intelligence 4 (2023): 4–7, https://doi.org/10.1016/j.caeai.2023.100127.
- Caitlin Bentley et al., “A Framework for Responsible AI Education: A Working Paper,” 17 August 2023, https://ssrn.com/abstract=4544010.
- Bentley et al., “A Framework for Responsible AI Education.”
- Southworth et al., “Developing a Model for AI Across the Curriculum.”
- Gazi Islam and Michelle Greenwood, “Generative Artificial Intelligence as Hypercommons: Ethics of Authorship and Ownership,” Journal of Business Ethics 192 (July 2024): 660, https://doi.org/10.1007/s10551-024-05741-9.
- Deloitte AI Institute, The Generative AI Dossier: A Selection of High-Impact Use Cases Across Six Major Industries (Deloitte AI Institute, 3 April 2023), 5, https://www2.deloitte.com/content/dam/Deloitte/th/Documents/deloitte-consulting/generative-AI-dossier.pdf.
- Task Force Lima, “Task Force Lima Use Case Tracker,” accessed 25 November 2024, https://qlik.advana.data.mil/sense/app/0b2bf41c-7013-4cd7-8246-7e9cebe1360b/sheet/a18a89a7-298b-4d40-8290-efa9ab210ee8/state/analysis (CAC required).
- Peyton Hurley, The Right Side of the Productivity Polarity: Integrating Generative AI into Professional Military Education (monograph, School of Advanced Military Studies [SAMS], May 2025).
- Annapureddy et al., “Generative AI Literacy.”
- Hurley, The Right Side of the Productivity Polarity.
- Hurley, The Right Side of the Productivity Polarity.
- Thomas K. F. Chiu, “The Impact of Generative AI on Practices, Policies, and Research Direction in Education,” Interactive Learning Environments, no. 10 (August 2023): 4–7, https://doi.org/10.1080/10494820.2023.2253861.
- You Jeen Ha et al., “Exploring the Impacts of Generative AI on the Future of Teaching and Learning,” Berkman Klein Center for Internet and Society, 31 May 2023, https://cyber.harvard.edu/story/2023-06/impacts-generative-ai-teaching-learning.
- The author helped design the SAMS AI curriculum in April 2025.
- Jason H. Rosentrauch, “Reimagining Army Education,” CGSC Foundation News, no. 36 (Spring 2025): 9, 14, https://www.cgscfoundation.org/wp-content/uploads/2025/07/FoundationNews-No36-Spring2025.pdf.
- Goldman et al., Intellectual Firepower.
- Islam and Greenwood, “Generative Artificial Intelligence as Hypercommons.”
- Jeremy Baum and John Villasenor, “Rendering Misrepresentation: Diversity Failures in AI Image Generation,” Brookings Institution, 17 April 2024, https://www.brookings.edu/articles/rendering-misrepresentation-diversity-failures-in-ai-image-generation/.
- Goldman et al., Intellectual Firepower.
- Patrick Kelly and Hannah Smith, “How to Think About Integrating Generative AI in Professional Military Education,” Military Review Online Exclusive, 23 May 2024, https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2024-OLE/Integrating-Generative-AI/.
Lt. Col. Peyton Hurley, U.S. Army, is the G-2 for the 1st Armored Division at Fort Bliss, Texas, and is a recent graduate of the School of Advanced Military Studies. During his career, Hurley has served in a variety of assignments from company to three-star headquarters, including two assignments as an observer coach/trainer at the Joint Multinational Readiness Center and the Joint Readiness Training Center.