An Eye for AI
Integrating Generative AI Imagery in Graduate PME
Luke M. Herrington and Jacob A. Mauslein
School of Advanced Military Studies, Army University
Abstract
Artwork produced by artificial intelligence (AI) can be highly beneficial in the classroom because visual aids play an important role in information retention and critical thinking, allowing for deeper learning experiences. Moreover, generative AI imagery provides educators with the flexibility to create custom-tailored illustrations to meet exact needs for concrete experiences, lectures, and presentations. Finally, it can stimulate independent learning when used by students to further explore AI capabilities. A series of examples drawn from the authors’ experiences in graduate seminars demonstrates how AI-generated art can enhance learning and create a more interactive environment. We conclude with a brief discussion of how the utilization of AI-generated imagery could help mitigate the risk of inadvertently infringing on copyrighted material.
Although art generated by artificial intelligence (AI) represents a technologically novel outlet for human creativity, visual communication has long featured as a prominent aspect of human social interaction. Images created using generative AI tools like DALL-E 3, Imagen, or Midjourney are the next step in visual communication. Generative AI-based illustrations may soon become a prominent feature in classrooms, boardrooms, and briefing rooms. Users who employ these tools could thus be put into advantageous positions going into their future professions. Given this, learning to communicate effectively with AI-generated imagery is an important skill that students, educators, and other professionals must develop in the classroom.
This article explores the integration of AI-generated imagery in lectures and curriculum development. Much of what has been written on AI in the classroom focuses on identifying the (mis)use of AI in student writing (Kelly & Smith, 2024). Pew finds that 35% of high school educators believe AI does more harm than good (Lin, 2024). Yet, according to Hopelab et al. (2024), only 24% of students aged 14–22 associate AI with cheating; and according to the Digital Education Council (2025), 61% of higher education faculty use it for teaching. Regardless, the plausibility of employing AI-generated imagery in lectures or other classroom activities, especially those geared toward a military audience, remains understudied. Addressing this gap is necessary to understand technology adoption by civilian and military educators and how it can positively influence student learning.
Inquiry into the impact of imagery on learning and information retention has a storied lineage. Visual storytelling can be traced to cave paintings at least 51,200 years old (Oktaviana et al., 2024; Sullivan, 2024). With the advent of writing only around 5,000 years ago, it is clear, as Medina (2009) asserts, that “our evolutionary history was never dominated by text” (p. 79).
From the fifth to fourth centuries BCE, Socrates, Plato, and Aristotle held varying perspectives about imagery and aesthetics. Although Socrates and Plato were critical of image use in pursuit of knowledge (Janaway, 1995), Aristotle recognized a linkage between art and the search for truth (Heidegger, 1977). Meanwhile, the Chinese adopted the Confucian idiom that “hearing something a hundred times isn’t better than seeing it once” (Poplin, 2024, p. 53), while Napoleon is credited with bringing a similar expression to Europe: “Un bon croquis vaut mieux qu’un long discours” [a good sketch is better than a long speech] (So, 2010, para. 4). Both phrases are apparently precursors to the modern advertising industry cliché that a picture is worth a thousand words (O’Toole, 2022; Safire, 1996).
Modern education continues linking visual communication and learning. Books designed for childhood audiences often connect visuals to narration (Bender, 1944; Smith et al., 2021; Strasser & Seplocha, 2007). Similarly, textbooks throughout primary and secondary education—sometimes followed even into graduate and postgraduate settings—rely heavily on the use of imagery to convey meaning and supplement text (Adams, 1974; George, 2002; Moore, 1999). Despite some student and faculty cynicism about the idiosyncratic use of visual elements in the classroom (George, 2002), visuals “deepen examinations of individuals, dynamics, and institutions” (Kelly & Sihite, 2018, p. 77).
The same is true for the military. Not only do maps link modern soldiers with Roman legionnaires (Dilke, 1987), but the U.S. Army has also long employed visual aids—photos, film, etc.—in training and educational material. For example, the Army’s use of cartoons and comics during World War II met with success especially in training and educating nonliterate and non-English speaking men who required instruction prior to entering basic training (Witty, 1944). Today, the experiential learning model employed in professional military education (PME) likewise incorporates images throughout the learning process. This is particularly useful when establishing concrete learning, conceptualizing abstract ideas, and promoting active experimentation (Pierson, 2017; Risner & Ward, 2014; see also Kolb, 2014). Traditionally, such visuals are limited to photographs, maps, paintings, and other publicly accessible media. These, however, may not be fully representative of the material they are designed to illustrate, forcing educators to rely on visuals based on availability rather than applicability to learning objectives. Incorporating new technologies into curriculum development may remove these limitations.
In the next section, we survey the literature on visual learning to develop our argument that artwork produced by AI can be highly beneficial in the classroom. A series of examples demonstrate how AI-generated art used in practical exercises can enhance learning and create a more interactive environment.1 We conclude with a brief discussion of ethical quandaries raised by AI diffusion models and how the utilization of AI-generated imagery could help mitigate the risk of inadvertently infringing on copyrighted material.2
Literature Review
Diverse literature exists on the intersection of classroom education and the use of images. This literature spans from biology to business, and although adults learn differently from children, similar findings emerge in both pedagogical and andragogical research (Arneson & Offerdahl, 2018; Cassara, 2021; Garcia-Retamero & Cokely, 2017; Jaekel, 2018; Jenkinson, 2018; Kelly & Sihite, 2018; Pierson, 2017; Roam, 2013). While research focused on civilian classrooms is the norm, many observations are also relevant for military classrooms. As Goldman et al. (2024) note, “Military educational institutions share many features with civilian institutions in terms of … the methods they use for teaching” (p. vii). Further, Murray (2014) concludes, “The way the human brain learns does not change because someone has put on a uniform. To argue otherwise would, simply put, be absurd” (para. 5). In short, the skills associated with visual expertise remain amenable to sharpening through learning or training, regardless of context, because visual learning remains a property of the brain (Lu & Dosher, 2022).
Despite its neurological underpinning, valid and easy-to-understand data on visual learning are difficult to come by. Questionable statistics—for example, that 90% of all information processed by the brain is processed visually or that 65% of the general population are visual learners—are rooted in commercial advertising (Coffman, 2021; Gossman, 2013). Nevertheless, many students do indeed consider themselves to be visual learners, meaning they prefer visual aids, such as graphs and charts, to traditional lectures (Fleming, 1995). Perhaps because the brain can detect an image in 13–100 milliseconds (Potter, 2014), many students also perceive multimedia-based instruction as more effective, leading to improved course evaluations (Latz & Rodgers, 2018) and possibly even increased registration rates and lower drop rates (Samaniego, 2022).
Further, incorporating photographs, artwork, and other pictures into lecture material and active-learning exercises can lead to increased student engagement, increased (and equitable) student participation, and enhanced learning with respect to diversity (Jaekel, 2018; Kaur, 2021; Kędra & Žakevičiūtė, 2019; Roberts, 2019). Clark (2008) found that the variety offered by visual aids stimulates students’ interest in learning. Additionally, images are suggested to promote student empathy and emotional intelligence in some contexts (Clark, 2008; Gil-Glazer, 2015; Latz & Rodgers, 2018).
The use of “literal,” “figurative,” and “paradoxical” imagery is positively correlated with active learning in large lectures (Roberts, 2018, p. 232; see also Roberts, 2019, p. 63). Literal images are simple representations that illustrate unfamiliar concepts. Figurative images are more metaphorical, eliciting analogical associations between concepts or other images to help students connect the unfamiliar with the already known. Finally, paradoxical images are self-contradictory, provoking confusion and attempts at reconciliation. They puzzle students and thus cue deeper engagement with the material (Roberts, 2019).
Additionally, research suggests that pairing pictures (or other multimedia) with lecture slides can reduce cognitive load. Spoken words and on-slide text compete for the same verbal processing channel, so slides with fewer words make it easier for students to follow oral information. Visual data, on the other hand, is processed by a different region of the brain, so heavy text can be replaced with visual aids that do not require the same cognitive effort to understand (Horvath, 2014; Jordan et al., 2020; Mayer & Moreno, 2003).
Memory also benefits from the integration of text and images. It takes 1–10 seconds for an image to be stored in long-term memory, and only three seconds for it to be remembered in detail (Potter, 2014). Despite this remarkable processing speed, images should still be related to the substantive content and not used merely as “filler” (Prosser, 2011; Read & Barnsley, 1977; Roediger, 1990; Schneider et al., 2020). Filler images—unlike literal, metaphorical, or paradoxical images—display details unrelated to learning outcomes while failing to stimulate discussion. These passive or “seductive” details may be interesting, but they have been found to increase cognitive load, in turn reducing their efficacy (Jaekel, 2018; Lehman et al., 2007; Metros, 2008; Park et al., 2015; Rey, 2011). This is significant, for while programs like Microsoft PowerPoint have received intense scrutiny in the U.S. Army and the Central Intelligence Agency, they remain endemic in the national security community generally and PME in particular (Burke, 2015; Davis, 2023; Gobry, 2017; Ricks, 2014). Reducing cognitive load could help students, faculty, and other security professionals absorb slide material more effectively.
Finally, effective engagement with images in the classroom, including the description, interpretation, analysis, evaluation, and synthesis of visual data, can promote critical thinking by sharpening students’ visual literacy. Visually literate students are equipped not just to understand images or other media but to think about how others perceive, interpret, and learn from them. Images also promote critical thinking by stimulating reflection on vital concepts external to the image on one hand while encouraging students to look for internal meaning on the other. Students can also think and communicate directly with pictures. Filler images still fall short here, however; good graphics stimulate active learning by prompting students to seek deeper meanings. Careful observation and recording of visual information lie at the heart of scientific observation, intelligence analysis, and other types of learning; used well, good graphics also encourage self-directed knowledge production. In short, the effective integration of images in the classroom can promote visual literacy and critical thinking through application at every level of Bloom’s (1956) taxonomy of educational objectives (Chapman & McShay, 2018; Cordell, 2016; Dupré, 2011; Elkins, 2007; Margolis & Zunjarwad, 2018; Roam, 2013; Roberts, 2019; Thompson, 2019).
Ultimately, visual literacy represents an especially valuable skill now that the proliferation of misleading imagery is no longer limited to Adobe Photoshop. In an era when AI-generated images can create misperceptions of terror attacks and briefly roil stock markets, or when deepfake video technology can mimic a presidential address, the visual literacy component of students’ critical thinking skills should be foregrounded in the classroom (Gstalter, 2018; Marcelo, 2023).
To that end, a small corpus on the use of generative AI art in the classroom has started proliferating across social media, blogs, and even some peer-reviewed publications (Cooper & Tang, 2024; Obeta, 2024; Rubman, 2024; Vartiainen & Tedre, 2023), but students remain less skilled at its use. According to a Hopelab et al. (2024) report, although 51% of young people aged 14–22 report using AI, only 31% report familiarity with image generation tools. PME faces a bigger lacuna. Although some military thinkers have suggested the use of AI in the after-action review process and in scenario development, we are aware of none to date who have participated in the emerging conversation on the use of generative AI art in PME or other military contexts (Cates et al., 2022; Coombs, 2024). Perhaps biased by Western pop culture’s generally negative portrayal of AI, most interest in generative AI in the PME and civilian context has instead focused on its abuse in student writing (Cox, 2018; Kelly & Smith, 2024). Despite the skeptical focus AI has received in some venues or the complete absence of study in others, AI-generated images can contribute to and foster an engaging PME environment.
Incorporating AI-Generated Imagery into PME
Guided by the benefits discussed previously, faculty in the Advanced Military Studies Program at the School of Advanced Military Studies (SAMS) designed practical exercises that use AI to enhance students’ experiences with image generation and critical thinking using programs like OpenAI’s DALL-E 3 and Google’s Imagen. Students have also used AI independently to enhance classroom camaraderie and visualize abstract concepts. This section details classroom-related experiences drawn from four SAMS seminars during academic years (AY) 24 and 25.
Data Literacy Lectures
During a series of AY25 SAMS lectures on data literacy and research design, we used AI-generated images to gently introduce students to the essence of quantitative and qualitative research methodology. Students could be introduced to complicated quantitative concepts by diving directly into topics like the presentation of complex numerical data in spreadsheets, regressions, or the analysis of patterns in figures. Alternatively, they could be introduced to qualitative methods through stuffy lectures on the complexities of interview question design, transcription, or content analysis. For students who are years removed from having written a graduate-level paper, this sort of conversation could be daunting or frustrating. Thus, we introduced the concepts of quantitative and qualitative research by using simplified but still accurate visual representations of each.
Two customized AI-generated images were created to achieve the above goals. We designed one literal and one metaphorical image to introduce the topics and reduce unnecessary wording on our lecture slides. A common theme was chosen for the two images: science fiction, a genre often included in book recommendations for armed forces members (Guldin, 2024; Mills & Heck, 2023). Additionally, as noted by Cavanaugh (2018), George Lucas’ Star Wars is a reliable frame of reference for many in the military because the franchise is commonly known (p. xiv). Gene Roddenberry’s Star Trek was also chosen as it is another widely known science fiction franchise among the PME population (Whitman Cobb, 2021).
First, quantitative data was depicted literally using AI-generated imagery of a bureaucrat in a gray suit, reminiscent of an Imperial officer from Star Wars, explaining the results of a line chart to a room of equally blandly dressed bureaucrats (see Figure 1). The lecturer explained the image as an analysis of declining employee morale after a significant, negative event took place. In the context of the AI-generated image, the lecturer used the example of the Death Star explosion in Star Wars: A New Hope (Lucas, 1977) as the cause of the declining morale. This AI-generated image was tailored with a specific context and purpose in mind: to introduce quantitative methods (the line chart), the analysis of quantitative data (declining employee morale), and a context in which quantitative methods could inform decisions about complex problems, all through a commonly known film franchise.
To visualize qualitative research methods, a metaphorical AI-generated image was created using a Star Trek-oriented theme. As shown in Figure 2, an image was generated that illustrated an individual in a red shirt speaking to a therapist. The image was created to convey the Star Trek trope that a “red shirt” crew member was sent on away missions with principal members of the USS Enterprise bridge crew, never to return to the ship. The lecturer explained that the therapist and the red-shirted individual were having a conversation—the collection of qualitative data—regarding the patient’s feelings about their upcoming away mission.
In both instances, due to concerns about copyright in the creation of an AI-generated image, no references to specific characters from either franchise were used in the prompts or depicted on the slides. For example, the image of the statistician lecturing the gray-suited bureaucrats could have been substituted with a more whimsical image of Darth Vader lecturing a group of storm troopers on laser blaster marksmanship and efficiency. However, it is unclear whether this sort of unlicensed—albeit original—use of a copyrighted character would constitute fair use or raise copyright questions (Piltch, 2024; Thompson, 2024). Similarly, the patient in the qualitative methods prompt was described as “a man wearing a red shirt” rather than merely as a “red shirt.” This stimulated creative prompt engineering on the instructor’s part to marry the theme with the desired lesson outcome. Further, the descriptions still satisfied the purpose of creating a customized image, and they did so without direct reference to the corresponding film franchises or television series.
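As a concrete illustration of this kind of prompt engineering, the sketch below shows how a franchise-neutral prompt might be submitted to DALL-E 3 through OpenAI’s Python SDK. It is a minimal sketch under stated assumptions, not the authors’ exact workflow: the prompt wording is hypothetical, and the code assumes the openai package is installed and an API key is available in the environment.

    # Minimal sketch: requesting a copyright-conscious, franchise-neutral illustration
    # from DALL-E 3 via the OpenAI Python SDK. The prompt text is hypothetical.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    prompt = (
        "A man in a plain red shirt talks with a therapist in a retro-futuristic "
        "starship counseling room about his upcoming away mission. No text in the image."
    )

    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )

    print(result.data[0].url)  # link for downloading the image into a lecture slide

Note that the prompt describes the scene generically (a red shirt, a starship counseling room) rather than naming characters, vessels, or franchises, mirroring the approach described above.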
Ultimately, the AI-generated lecture images realized many of the benefits identified in the image literature covered previously. These included fewer words on each slide, greater emphasis on the lecturer’s oratory, and images that served a functional purpose rather than acting as mere space filler. In addition to prompting laughter from the students, which has been shown to be an effective way of boosting information comprehension (Hackathorn et al., 2011), the two AI-generated images allowed for an accessible and customized visual representation of complex methodologies. They were generated to the delivering lecturer’s exact specifications and framed around the common theme of science fiction.
Anchoring Limitations
One issue of concern identified when working specifically with DALL-E 3 was anchoring, or a priming effect wherein an AI model—like a person—can become attached to initial information that disproportionately impacts future output (Kahneman, 2011; Nguyen, 2024). While preparing a lecture on qualitative data and research design, we generated an image of a graying professor in a tweed jacket working with statistics to serve the same purpose as the illustrations discussed above. Although we liked our professor enough to name him Joel, this seductive detail was discarded as it served no purpose other than to function as slide filler. The lecturer then attempted to generate a more meaningful image. After planning a lesson using AI artwork to illustrate abstract concepts (e.g., empathy, constructivism, postmodernism, morality) for another class, the lecturer decided to illustrate the concept of “qualitative.”
Interestingly, DALL-E 3 anchored on the idea of Joel the statistician and subsequently produced four pictures of “qualitative” that all inaccurately featured bar graphs, pie charts, and other quantitative data visuals. One image even included a chemist in a lab coat. Similarly, when the lecturer requested an image of Carl von Clausewitz, DALL-E 3 remained anchored on the idea of Joel and produced a portrait of the professor in 1800s-era Prussian military garb as if he were Clausewitz. It almost goes without saying that Joel von Clausewitz was discarded just as our depictions of “qualitative” had been. Even so, he is worth noting, as educators wishing to integrate generative AI artwork into their own classes may experience similar phenomena.
Data Visualization
During the data literacy lesson of an AY25 theory course, students in two seminars participated in a practical exercise exposing them to the Correlates of War (COW) data project (Correlates of War Project, 2024). Instructions for this active-learning exercise required students to develop a graphical and tabular representation of the data with which they were working. Both seminars completed this task after being divided into four groups. Students in one of the seminars were also given the opportunity to experiment with generative AI to develop a third visualization. Using their data, they were asked to generate visual depictions of a given country’s history of conflict. Three did so successfully in the time allotted, while the fourth ran out of time.
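For instructors preparing a similar exercise, the baseline graphical and tabular summary can be sketched quickly in Python before class. The snippet below is a minimal sketch only, not the tooling the seminar groups used; the file name and column names are assumptions for illustration and should be verified against the Formal Alliance (Version 4.1) codebook on the Correlates of War site.

    # Minimal sketch: tabular and graphical summary of a state's alliance onsets
    # from the COW Formal Alliance (v4.1) data. File and column names are assumed.
    import pandas as pd
    import matplotlib.pyplot as plt

    alliances = pd.read_csv("alliance_v4.1_by_member.csv")  # hypothetical local copy
    uk = alliances[alliances["state_name"] == "United Kingdom"]  # assumed column name
    per_decade = uk.groupby((uk["all_st_year"] // 10) * 10).size()  # assumed column name

    print(per_decade.to_frame("alliances_formed"))  # tabular representation

    per_decade.plot(kind="bar", title="UK alliance onsets per decade")  # graphical representation
    plt.xlabel("Decade")
    plt.ylabel("Alliances formed")
    plt.tight_layout()
    plt.show()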
The first group worked with the COW Formal Alliance (Version 4.1) dataset to depict the United Kingdom’s history of conflict in two different ways (Gibler, 2009, 2013). First, using a text-to-image generator (DALL-E 3), they ended up with a pseudo-political map of the world highlighting geographic regions mostly associated with the British Empire. As with ChatGPT’s written claims, students using text-to-image generators must not assume that AI artwork always depicts information accurately; the map included several anachronisms. However, even inaccurate visuals can promote visual literacy and critical thinking. This map prompted a thorough discussion about the reliability of large language models (LLMs) and diffusion models like DALL-E 3.
The group’s second visual also provided a useful teaching point. Using an LLM like ChatGPT, they developed a hierarchical organizational chart depicting Britain’s alliance memberships. Unlike their map, which included indecipherable text, the organizational chart included readable, accurate text. Because LLMs are trained on massive text-based datasets, they recognize and reproduce textual information far better than diffusion models do, as the latter are generally trained on visual datasets (HDSICOMM, 2025; Shipps, 2024).
The second group worked with COW’s World Religion (Version 1.1) dataset to explore India’s history of conflict (Maoz & Henderson, 2013a, 2013b). Like the first group, they provided two visualizations. The image generator first attempted to illustrate a bar graph overlaid with a trend line, but it showed an assortment of Indian and Pakistani soldiers in lieu of bars. The quantitative “data” conveyed in the image appeared to be meaningless, but the unique illustration hints at the potential of generative AI art to create powerful visuals in the future if the relevant diffusion models or LLMs are updated to accurately depict quantitative data. The group’s second illustration was more abstract. It showed the people of India and Pakistan divided by a stark line separating them militarily and politically as well as culturally and economically. Although a less effective visual in terms of representing data, the image would have been quite powerful had it not been for the diffusion model’s confusing decision to depict Pakistan as a purely maritime power while representing India as a purely land power.
A visualization created by the final group included an artistic rendering of Paraguayan conflicts (see Figure 3). This would have been an effective visualization had it not included a number of anachronisms. For example, the AI-generated depiction of the War of the Triple Alliance (1864–1870) included dragons, cars, and modern weapons. Whereas most of the images presented by the group were illustrative images, pictures that include anachronisms such as these can be thought of as a kind of paradoxical image (Roberts, 2019). The jarring incongruities stimulated discussion. As with the map example noted above, the portrait of the War of the Triple Alliance became the basis of a discussion on AI reliability, and the fact that diffusion models—like their LLM counterparts—are capable of “hallucination,” or the presentation of false, unsubstantiated, or nonsensical information mistakenly fabricated by the AI as true (Farquhar et al., 2024). Additionally, although we are unaware of any likely source for such confusion, the inclusion of the dragons stimulated further independent learning, raising questions about whether the AI had been confused by dragon-related historical analogies, metaphors, or the like associated with the War of the Triple Alliance.
Student-Led Innovation
Students seem to represent the leading edge of experimentation with generative AI in PME. For instance, they use the programs playfully to create memes, team logos, and classroom mascots. In one memorable AY24 example, students generated an image of an anthropomorphized fox in military regalia that parodied Berlin’s (1953) concept of broad-minded foxes and their more narrow-minded hedgehog counterparts (see Figure 4). The woodland two-star general that resulted was then identified as the ideal military planner based on arguments put forward in Tetlock and Gardner’s Superforecasting (Berlin, 1953; Epstein, 2019; Tetlock & Gardner, 2015; see also Schwartz, 1996). In addition to depicting course concepts in easily digestible ways, AI was also used to satirize course readings, highlight confusion, and express frustrations with difficult texts.
Meanwhile, in his AY24 Advanced Military Studies Program monograph, Haydock (2024) used DALL-E 3 to depict several novel ways in which a military could make use of existing technology to execute its deception theory and deception doctrine. One image included in his monograph depicts the use of commercial airliners to airdrop troops. Another demonstrates the use of container ships as mobile helicopter pads. Finally, a third illustrates the use and grounding of ferries modified to serve as amphibious roll-on/roll-off vehicle transports capable of landing tanks. The images included in his monograph provide effective illustrations of otherwise theoretical ideas with few real-world analogues (Haydock, 2024). Students in another AY24 seminar took Haydock’s approach a step further. Tasked with conceptualizing new technologies that do not yet exist, several similarly employed generative AI to illustrate their ideas.
Considerations for Implementation
Based on our experience, educators considering the use of AI-generated imagery in their classrooms have a host of issues to consider. First, they need to avoid using filler images; AI-generated images should have substantive meaning related to learning outcomes. Second, educators should exercise caution when working with copyrighted or trademarked material. Third, they need to be mindful of the limitations (e.g., anchoring, hallucination) of AI. Additional considerations might include being mindful of institutional policy, ensuring at least one student in an assigned group is already familiar with AI, and maintaining a repository where images and prompts can be easily shared and accessed. Finally, comparing different services and discussing the process can be just as valuable as creating the product.
Moving Forward
While AI-generated art is often associated with malign activities such as the creation and dissemination of deepfake videos, it is clear it can be used in the classroom to students’ collective benefit. Weaving images produced by AI into a curriculum can allow for deeper learning experiences. However, because the benefits of leveraging artificially produced artwork have received such scant attention, numerous questions remain about how to effectively employ resources like DALL-E 3, Imagen, Stable Diffusion, or Midjourney in the classroom.
Although visual aids have merit in both PME and civilian educational environments, and AI-generated artwork in particular can be leveraged in both pedagogical and andragogical approaches to teaching, it stands to reason that the best ways to use images effectively will vary across generational cohorts, classrooms, disciplines, and other factors. Some common findings from the literature may guide educators at any level. It would behoove a grade school teacher to avoid “seductive details,” including images used only as filler, just as it would a graduate-level PME instructor. However, a high school teacher’s current events class is likely to use AI artwork in a way quite different from an undergraduate political science class. The former might use AI simply to depict current events to promote thoughtful discussion, while the latter might leverage AI artwork to promote a suite of skills related to media literacy, visual literacy, and critical thinking. This could include illustrating abstract ideas (e.g., justice), demonstrating the potential of AI to mislead voters, or engaging with deepfakes.
Other factors likely to impact the effective integration of AI-generated images into a given course include instructor comfort level, individual and institutional hesitancy, and different levels of technical proficiency among students and faculty. As Kelly and Sihite (2018, p. 77) note, “How students learn depends largely on such contextual factors as how a teacher teaches.” Yet, these conjectures remain hypotheses to be tested with longitudinal data, survey methods, and statistical modeling. Further research into AI-generated art must also identify the conditions that best promote student learning, including through carefully controlled blind experiments capable of testing the effects of different uses of AI art.
Conclusion
Our current efforts aim only to probe the plausibility of integrating images produced by AI in the classroom. As such, our goal is to stimulate further creative thinking about how to use AI in the classroom while hinting at some of the benefits working with these images may present. For example, the use of AI-generated visual aids has, in our experience at SAMS, stimulated continued, independent learning in ways consistent with the adult learning model employed by faculty at the Command and General Staff College. If findings from the literature continue to bear out, AI-generated images could also play an important role in information retention when purposefully employed in ways linked to student learning objectives. Moreover, AI-generated imagery provides educators and other presenters with the flexibility to customize visuals to suit their unique content requirements. Illustrations can be custom-tailored for any presentation. Whether for faculty lectures, student presentations, military briefs, or something else entirely, AI art can meet a speaker’s exact needs (so long as the prompt is right).
For faculty, students, or others who pull their visual aids directly from Google Images or similar services, the use of AI-generated imagery could also mitigate the risk of inadvertently infringing on copyrighted material since programs like DALL-E 3 typically produce royalty-free content. Copying pictures archived on searchable databases like Google Images into a lecture is considered fair use by some institutions, including, for example, the libraries at the University of North Carolina at Chapel Hill (UNC University Libraries, n.d.). Others maintain that using such pictures would still require the lecturer to obtain the appropriate permissions (Copyrightlaws.com, 2022). At the Command and General Staff College, policy mandates that fair use determinations come from the Office of the Judge Advocate General (Martin, 2020).
As with images that have entered the public domain or works published under a Creative Commons license, students and faculty working in PME can simplify this process by integrating AI-generated artwork into their classes since this material cannot be copyrighted (Brittain, 2023). This would also avoid creating additional work for the Office of the Judge Advocate General. Admittedly, however, users would still need to grapple with the ethical quandaries raised by using potentially copyrighted material in the initial training of the AI. Since these issues are still working their way through the courts though, we do not consider them further here (Belanger, 2024; Brittain, 2024; Edwards, 2023; see also Murray, 2023).
Ultimately, the use of AI-generated imagery in the classroom has a multitude of benefits for educators and students; these benefits compound and directly translate to a strengthening of the operational force. As educators and students are further exposed to AI, they gain an increased understanding of a vital technology being incorporated into more and more military systems. This reality is playing out on the battlefields of Ukraine and Russia as well as in Israel and Gaza. It is also a fundamental element of disinformation campaigns. Additionally, AI and AI-generated imagery prompt critical thinking—a trait required of all PME graduates. The importance of these skills cannot be overstated, and any way in which educators can promote this technology in the classroom will be a benefit to students and the institutions they serve.
Disclaimer
Opinions, conclusions, and recommendations expressed or implied are solely those of the authors and do not necessarily represent the U.S. School of Advanced Military Studies, the U.S. Army Command and General Staff College, the U.S. Army, or any other U.S. government agency; references to specific platforms and software are not intended as either endorsement or promotion.
Notes
1. The methods described in this article were developed prior to the December 2024 release of Combined Arms Center Command Policy Letter 24 (Combined Arms Center Guidance on Generative Artificial Intelligence [GenAI] and Large Language Models [LLM]).
2. Where generative AI models refer to computer systems that use massive preexisting datasets and predictive algorithms to produce content like text in response to a user prompt (Kelly & Smith, 2024), diffusion models are a subset of generative AI that refine randomized data to produce structured visual output like an image (HDSICOMM, 2025).
References
Adams, W. C. (1974). Introductory American government textbooks: An anatomical analysis. PS: Political Science & Politics, 7(3), 260–261. https://doi.org/10.2307/418148
Arneson, J. B., & Offerdahl, E. G. (2018). Visual literacy in Bloom: Using Bloom’s taxonomy to support visual learning skills. CBE—Life Sciences Education, 17(1), 1–8. https://doi.org/10.1187/cbe.17-08-0178
Belanger, A. (2024, August 14). Artists claim “big” win in copyright suit fighting AI image generators. Ars Technica. https://arstechnica.com/tech-policy/2024/08/artists-claim-big-win-in-copyright-suit-fighting-ai-image-generators/
Bender, L. (1944). The psychology of children’s reading and the comics. Journal of Educational Sociology, 18(4), 223–231. https://doi.org/10.2307/2262695
Berlin, I. (1953). The hedgehog and the fox. Weidenfeld and Nicolson.
Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. David McKay Company.
Brittain, B. (2023, August 21). AI-generated art cannot receive copyrights, US court says. Reuters. https://www.reuters.com/legal/ai-generated-art-cannot-receive-copyrights-us-court-says-2023-08-21/
Brittain, B. (2024, April 29). Google sued by US artists over AI image generator. Reuters. https://www.reuters.com/legal/litigation/google-sued-by-us-artists-over-ai-image-generator-2024-04-29/
Burke, C. (2015, March 3). 5 ways not to bore your troops to death with PowerPoint. Task & Purpose. https://taskandpurpose.com/tech-tactics/5-ways-not-bore-troops-death-powerpoint/
Cassara, M. (2021). Concept mapping: An andragogy suited for facilitating education of the adult millennial learner. In A. Fornari & A. Poznanski (Eds.), How-to guide for active learning (pp. 67–83). Springer.
Cates, K., Banghart, M., & Plant, A. (2022). Improving after action review (AAR): Applications of natural language processing and machine learning. Journal of Military Learning, 6(1), 3–14. https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/April-2022/Cates-Action-Review/
Cavanaugh, M. L. (2018). Preface. In M. Brooks, J. Amble, M. L. Cavanaugh, & J. Gates (Eds.), Strategy strikes back: How Star Wars explains modern military conflict (pp. xiii–xvii). Potomac Books.
Chapman, N. H., & McShay, J. (2018). Digital stories: A critical pedagogical tool in leadership education. In B. T. Kelly & C. A. Kortegast (Eds.), Engaging images for research, pedagogy, and practice: Utilizing visual methods to understand and promote college student development (pp. 135–149). Routledge.
Clark, J. (2008). PowerPoint and pedagogy: Maintaining student interests in university lectures. College Teaching, 56(1), 39–44. https://doi.org/10.3200/CTCH.56.1.39-46
Coffman, V. (2021, September 28). Visual learner: Characteristics, study tips, and activities. Homeschool Blog. https://blog.bjupress.com/blog/2021/09/28/visual-learner-characteristics/
Coombs, R. A. (2024, May 15). AI integration for scenario development: Training the whole-of-force. Military Review. https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2024-OLE/AI-Integration-for-Scenario-Development/
Cooper, G., & Tang, K.-S. (2024). Pixels and pedagogy: Examining science education imagery by generative artificial intelligence. Journal of Science Education and Technology, 33(4), 556–568. https://doi.org/10.1007/s10956-024-10104-0
Copyrightlaws.com. (2022, November 14). 6 best practices for legally using Google Images. https://www.copyrightlaws.com/copyright-tips-legally-using-google-images/
Cordell, D. M. (2016). Using images to teach critical thinking skills: Visual literacy and digital photography. Libraries Unlimited.
Correlates of War Project. (n.d.). About the Correlates of War project. Retrieved July 23, 2025, from https://correlatesofwar.org/
Cox, D. G. (2018). Chinese advantages in the development and integration of artificial intelligence and warfare. InterAgency Journal, 9(3), 49–56.
Davis, E. (2023, October 17). The need to train data-literate U.S. Army commanders. War on the Rocks. https://warontherocks.com/2023/10/the-need-to-train-data-literate-u-s-army-commanders/
Digital Education Council. (2025, January 20). Digital education council global AI faculty survey 2025. https://www.digitaleducationcouncil.com/post/digital-education-council-global-ai-faculty-survey
Dilke, O. A. W. (1987). Maps in the service of the state: Roman cartography to the end of the Augustan era. In J. B. Harley & D. Woodward (Eds.), The history of cartography: Vol. 1: Cartography in prehistoric, ancient, and medieval Europe and the Mediterranean (pp. 201–211). University of Chicago Press.
Dupré, R. E. (2011). Guide to imagery intelligence. The Intelligencer: Journal of U.S Intelligence Studies, 18(2), 61–63.
Edwards, B. (2023, March 22). Ethical AI art generation? Adobe Firefly may be the answer. Ars Technica. https://arstechnica.com/information-technology/2023/03/ethical-ai-art-generation-adobe-firefly-may-be-the-answer/
Elkins, J. (2007). Introduction: The concept of visual literacy, and its limitations. In J. Elkins (Ed.), Visual literacy (pp. 1–9). Routledge.
Epstein, D. (2019). Range: Why generalists triumph in a specialized world. Riverhead.
Farquhar, S., Kossen, J., Kuhn, L., & Gal, Y. (2024). Detecting hallucinations in large language models using semantic entropy. Nature, 630, 625–630. https://doi.org/10.1038/s41586-024-07421-0
Fleming, N. D. (1995). I’m different; not dumb: Modes of presentation (VARK) in the tertiary classroom. Research and Development in Higher Education, Proceedings of the 1995 Annual Conference of the Higher Education and Research Development Society of Australasia (HERDSA), 18, 308–313.
Garcia-Retamero, R., & Cokely, E. T. (2017). Designing visual aids that promote risk literacy. Human Factors, 59(4), 582–627. https://doi.org/10.1177/0018720817690634
George, D. (2002). From analysis to design: Visual communication in the teaching of writing. College Composition and Communication, 54(1), 11–39. https://doi.org/10.58680/ccc20021473
Gibler, D. M. (2009). International military alliances, 1648–2008. CQ Press.
Gibler, D. M. (2013). Formal alliances (Version 4.1) [Data set]. Correlates of War. https://correlatesofwar.org/data-sets/formal-alliances/
Gil-Glazer, Y. (2015). Photography, critical pedagogy, and “difficult knowledge.” International Journal of Education Through Art, 11(2), 261–276.
Gobry, P.-E. (2017, January 17). General Mattis, save the U.S. military. Ban PowerPoint. The Week. https://theweek.com/articles/673091/general-mattis-save-military-ban-powerpoint
Goldman, C. A., Mayberry, P. W., Thompson, N., Hubble, T., & Giglio, K. (2024). Intellectual firepower: A review of professional military education in the U.S. Department of Defense. RAND.
Gossman, K. (2013). Beware: You can’t trust the internet. EVG Media. https://evgmedia.com/you-cant-trust-the-internet/
Gstalter, M. (2018, April 17). “Obama” voiced by Jordan Peele in PSA video warning about fake videos. The Hill. https://thehill.com/blogs/in-the-know/in-the-know/383525-obama-voiced-by-jordan-peele-in-psa-video-warning-about-fake/
Guldin, J. (2024, March 27). 17 books every service member should read, according to troops and veterans. Military.com. https://www.military.com/off-duty/books/17-books-every-service-member-should-read-according-troops-and-veterans.html
Hackathorn, J., Garczynski, A. M., Blankmeyer, K., Tennial, R. D., & Solomon, E. D. (2011). All kidding aside: Humor increases learning at knowledge and comprehension levels. Journal of the Scholarship of Teaching and Learning, 11(4), 116–123. https://scholarworks.iu.edu/journals/index.php/josotl/article/view/1837/1834
Haydock, T. L. (2024). Defeating deception: Outthinking Chinese deception in a Taiwan invasion [Unpublished master’s monograph]. School of Advanced Military Studies.
HDSICOMM. (2025, April 2). Expanding the use and scope of AI diffusion models. University of California, San Diego. https://datascience.ucsd.edu/expanding-the-use-and-scope-of-ai-diffusion-models/
Heidegger, M. (1977). The question concerning technology and other essays. Harper & Row.
Hopelab, Common Sense Media, & the Harvard Center for Digital Thriving. (2024, June 3). Teen and young adult perspectives on generative AI: Patterns of use, excitements, and concerns. https://digitalthriving.gse.harvard.edu/wp-content/uploads/2024/06/Teen-and-Young-Adult-Perspectives-on-Generative-AI.pdf
Horvath, J. C. (2014). The neuroscience of PowerPoint™. Mind, Brain, and Education, 8(3), 137–143. https://doi.org/10.1111/mbe.12052
Jaekel, K. S. (2018). Pedagogical strategies for developing visual literacy through social justice. In B. T. Kelly & C. A. Kortegast (Eds.), Engaging images for research, pedagogy, and practice: Utilizing visual methods to understand and promote college student development (pp. 105–118). Stylus.
Janaway, C. (1995). Images of excellence: Plato’s critique of the arts. Oxford University Press.
Jenkinson, J. (2018). Molecular biology meets the learning sciences: Visualizations in education and outreach. Journal of Molecular Biology, 430(21), 4013–4027. https://doi.org/10.1016/j.jmb.2018.08.020
Jordan, J., Wagner, J., Manthey, D. E., Wolf, M., Santen, S., & Cico, S. J. (2020). Optimizing lectures from a cognitive load perspective. AEM Education and Training, 4(3), 306–312. https://doi.org/10.1002/aet2.10389
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus & Giroux.
Kaur, A. W. (2021). “Dope syllabus”: Student impressions of an infographic-style visual syllabus. International Journal for the Scholarship of Teaching and Learning, 15(2), 1–16. https://doi.org/10.20429/ijsotl.2021.150206
Kędra, J., & Žakevičiūtė, R. (2019). Visual literacy practices in higher education: What, why and how? Journal of Visual Literacy, 38(1-2), 1–7. https://doi.org/10.1080/1051144X.2019.1580438
Kelly, B. T., & Sihite, E. U. (2018). Overview of the use of visual methods in pedagogy. In B. T. Kelly & C. A. Kortegast (Eds.), Engaging images for research, pedagogy, and practice: Utilizing visual methods to understand and promote college student development (pp. 77–90). Stylus.
Kelly, P., & Smith, H. (2024, May 23). How to think about integrative generative AI in professional military education. Military Review. https://www.armyupress.army.mil/journals/military-review/online-exclusive/2024-ole/integrating-generative-ai/
Kolb, D. A. (2014). Experiential learning: Experience as the source of learning and development. Pearson Education.
Latz, A. O., & Rodgers, K. L. (2018). Photovoice and visual life writing. In B. T. Kelly & C. A. Kortegast (Eds.), Engaging images for research, pedagogy, and practice: Utilizing visual methods to understand and promote college student development (pp. 91–104). Stylus.
Lehman, S., Schraw, G., McCrudden, M., & Hartley, K. (2007). Processing and recall of seductive details in scientific text. Contemporary Educational Psychology, 32(4), 569–587. https://doi.org/10.1016/j.cedpsych.2006.07.002
Lin, L. (2024, May 15). A quarter of U.S. teachers say AI tools do more harm than good in K-12 education. Pew Research Center. https://www.pewresearch.org/short-reads/2024/05/15/a-quarter-of-u-s-teachers-say-ai-tools-do-more-harm-than-good-in-k-12-education/
Lu, Z.-L., & Dosher, B. A. (2022). Current directions in visual perceptual learning. Nature Reviews Psychology, 1(11), 654–668. https://doi.org/10.1038/s44159-022-00107-2
Lucas, G. (Director). (1977). Star wars: A new hope [Film]. 20th Century Fox.
Maoz, Z., & Henderson, E. A. (2013a). World Religion Data (Version 1.1) [Data set]. Correlates of War. https://correlatesofwar.org/data-sets/world-religion-data/
Maoz, Z., & Henderson, E. A. (2013b). The world religion dataset, 1945–2010: Logic, estimates, and trends. International Interactions, 39(3), 265–291. https://doi.org/10.1080/03050629.2013.782306
Marcelo, P. (2023, May 23). Fake image of Pentagon explosion briefly sends jitters through stock market. Associated Press. https://apnews.com/article/pentagon-explosion-misinformation-stock-market-ai-96f534c790872fde67012ee81b5ed6a4
Margolis, E., & Zunjarwad, R. (2018). Visual research. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 600–626). Sage.
Martin, J. B. (2020). Copyright policies and general guidelines (Command and General Staff College Bulletin 918). https://carlcgsc.libguides.com/ld.php?content_id=47593935
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52. https://doi.org/10.1207/S15326985EP3801_6
Medina, J. (2009). Brain rules: 12 principles for surviving and thriving at work, home, and school. Pear Press.
Metros, S. E. (2008). The educator’s role in preparing visually literate learners. Theory into Practice, 47(2), 102–109. https://doi.org/10.1080/00405840801992264
Mills, W., & Heck, T. (2023, August 11). War books: A major war fiction reading list. Modern War Institute. https://mwi.westpoint.edu/war-books-preparing-great-power-competition-fiction-reading-list/
Moore, K. (1999, July 29). How real is visual thinking? Architect’s Journal. https://www.architectsjournal.co.uk/archive/how-real-is-visual-thinking
Murray, M. D. (2023). Generative AI art: Copyright infringement and fair use. SMU Science and Technology Law Review, 26(2), 259–315. https://doi.org/10.25172/smustlr.26.2.4
Murray, N. (2014, July 19). More dissent needed: Critical thinking and PME. War on the Rocks. https://warontherocks.com/2014/07/more-dissent-needed-critical-thinking-and-pme/
Nguyen, J. K. (2024). Human bias in AI models? Anchoring effects and mitigation strategies in large language models. Journal of Behavioral and Experimental Finance, 43, 1–8. https://doi.org/10.1016/j.jbef.2024.100971
Obeta, S. (2024, April 3). From lecture halls to pixels: Exploring the role of AI in educational image generation. LinkedIn Pulse. https://www.linkedin.com/pulse/from-lecture-halls-pixels-exploring-role-ai-image-generation-obeta-zm7bf
Oktaviana, A. A., Joannes-Boyau, R., Hakim, B., Burhan, B., Sardi, R., Adhityatama, S., Hamrullah, Sumantri, I., Tang, M., Lebe, R., Ilyas, I., Abbas, A., Jusdi, A., Mahardian, D. E., Noerwidi, S., Ririmasse, M. N. R., Mahmud, I., Duli, A., Aksa, L. M., McGahan, D., Setiawan, P., … Aubert, M. (2024). Narrative cave art in Indonesia by 51,200 years ago. Nature, 631, 814–818. https://doi.org/10.1038/s41586-024-07541-7
O’Toole, G. (2022, July 22). A picture is worth ten thousand words. Quote Investigator. https://quoteinvestigator.com/2022/07/22/picture-words/
Park, B., Flowerday, T., & Brunken, R. (2015). Cognitive and affective effects of seductive details in multimedia learning. Computers in Human Behavior, 44, 267–278. https://doi.org/10.1016/j.chb.2014.10.061
Pierson, D. (2017). Reengineering Army education for adult learners. Journal of Military Learning, 1(2), 31–43. https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/October-2017-Edition/Pierson-Reengineering-Army-Education/
Piltch, A. (2024, January 6). Mickey Mouse and Darth Vader smoking pot: AI image generators play fast and loose with copyrighted characters. Tom’s Hardware. https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-image-generators-output-copyrighted-characters
Poplin, J. (2024). Memes, myth and meaning in 21st century Chinese visual culture. Palgrave Macmillan.
Potter, M. C. (2014). Detecting and remembering briefly presented pictures. In K. Kveraga & M. Bar (Eds.), Scene vision: Making sense of what we see (pp. 177–197). MIT Press.
Prosser, J. D. (2011). Visual methodology: Toward a more seeing research. In N. K. Denzin & Y. S. Lincoln (Eds.), Collecting and interpreting qualitative materials (pp. 177–211). Sage.
Read, J. D., & Barnsley, R. H. (1977). Remember Dick and Jane? Memory for elementary school readers. Canadian Journal of Behavioral Science, 9(4), 361–370. https://psycnet.apa.org/doi/10.1037/h0081641
Rey, G. D. (2011). Seductive details in multimedia messages. Journal of Educational Multimedia and Hypermedia, 20(3), 283–314. http://learntechlib.org/primary/p/36221/
Ricks, T. E. (2014, March 4). Want to reform military education? An easy 1st step would be banning PowerPoint. Foreign Policy. https://foreignpolicy.com/2014/03/04/want-to-reform-military-education-an-easy-1st-step-would-be-banning-powerpoint/
Risner, R., & Ward, T. E., II. (2004). Concrete experiences and practical exercises: Interventions to create a context for a synergistic learning environment. Command and General Staff College. https://higherlogicdownload.s3.amazonaws.com/EVAL/e7137a06-d1d5-4ab6-8bd0-d1818afdb6ab/UploadedImages/Uploaded%20Documents/Concrete%20Experiences%20&%20Practical%20Exercises%20v3a.pdf
Roam, D. (2013). Back of the napkin: Solving problems and selling ideas with pictures. Penguin.
Roberts, D. (2018). “The message is the medium”: Evaluating the use of visual images to provoke engagement and active learning in politics and international relations lectures. Politics, 38(2), 232–249. https://doi.org/10.1177/0263395717717229
Roberts, D. (2019). Higher education lectures: From passive to active learning via imagery? Active Learning in Higher Education, 20(1), 63–77. https://doi.org/10.1177/1469787417731198
Roediger, H. L. (1990). Implicit memory: Retention without remembering. American Psychologist, 45(9), 1043–1056. https://doi.org/10.1037/0003-066X.45.9.1043
Rubman, J. (2024, March 6). Supporting learning with AI-generated images: A research-backed guide. MIT Management Sloan Teaching and Learning Technologies Blog. https://mitsloanedtech.mit.edu/2024/03/06/supporting-learning-with-ai-generated-images-a-research-backed-guide/
Safire, W. (1996, April 7). On language; Worth a thousand words. New York Times Magazine. https://www.nytimes.com/1996/04/07/magazine/on-language-worth-a-thousand-words.html
Samaniego, M. (2022, November 29). Professor shows Subway Surfers gameplay videos during lecture. The MQ. https://themq.org/2022/11/articles/news/professor-shows-subway-surfers-gameplay-videos-during-lecture/
Schneider, S., Nevel, S., Beege, M., & Rey, G. D. (2020). The retrieval-enhancing effects of decorative pictures as memory cues in multimedia learning videos and subsequent performance tests. Journal of Educational Psychology, 112(6), 1111–1127. http://dx.doi.org/10.1037/edu0000432
Schwartz, P. (1996). The art of the long view: Planning for the future in an uncertain world. Doubleday.
Shipps, A. (2024, June 17). Understanding the visual knowledge of language models. MIT News. https://news.mit.edu/2024/understanding-visual-knowledge-language-models-0617
Smith, P. L., Goodmon, L. B., Howard, J. R., Hancock, R., Hartzell, K. A., & Hillbert, S. E. (2021). Graphic novelisation effects on recognition abilities in students with dyslexia. Journal of Graphic Novels and Comics, 12(2), 127–144. https://doi.org/10.1080/21504857.2019.1635175
So, J. (2010, June 30). Napoleon’s hair sold for $13,000. CBS News. https://www.cbsnews.com/news/napoleons-hair-sold-for-13000/
Strasser, J., & Seplocha, H. (2007). Using picture books to support young children’s literacy. Childhood Education, 83(4), 219–224. https://doi.org/10.1080/00094056.2007.10522916
Sullivan, W. (2024, July 5). Indonesian cave painting is oldest-known visual storytelling. Smithsonian Magazine. https://www.smithsonianmag.com/smart-news/indonesian-cave-painting-is-oldest-known-visual-storytelling-180984660/
Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The art and science of prediction. Crown.
Thompson, D. S. (2019). Teaching students to critically read digital images: A visual literacy approach using the DIG method. Journal of Visual Literacy, 38(1-2), 110–119. https://doi.org/10.1080/1051144X.2018.1564604
Thompson, S. A. (2024, January 25). We asked A.I. to create the Joker. It generated a copyrighted image. New York Times. https://www.nytimes.com/interactive/2024/01/25/business/ai-image-generators-openai-microsoft-midjourney-copyright.html
UNC University Libraries. (n.d.). Can I use images from Google in a presentation for a class lecture? Retrieved July 23, 2025, from https://asklib.hsl.unc.edu/faq/193437
Vartiainen, H., & Tedre, M. (2023). Using artificial intelligence in craft education: Crafting with text-to-image generative models. Digital Creativity, 34(1), 1–21. https://doi.org/10.1080/14626268.2023.2174557
Whitman Cobb, W. (2021). “It’s a Trap!” The pros and mostly “Khans” of science fiction’s influence on the United States Space Force. Space Force Journal. https://spaceforcejournal.org/its-a-trap-the-pros-and-mostly-khans-of-science-fictions-influence-on-the-united-states-space-force/
Witty, P. A. (1944). Some uses of visual aids in the Army. Journal of Educational Sociology, 18(4), 241–249. https://doi.org/10.2307/2262697
Luke M. Herrington is an assistant professor of social science at the School of Advanced Military Studies at Fort Leavenworth, Kansas. He earned his PhD in political science from the University of Kansas, where he taught the history of western civilization while studying international relations and comparative politics. His research focuses on the intersection of politics with beliefs, perceptions, and attitudes, including the relationship between terrorism and religion, and the relationship between information warfare and conspiracy theory. Herrington has also taught at Park University.
Jacob A. Mauslein is an associate professor of social science at the School of Advanced Military Studies at Fort Leavenworth, Kansas. He earned his doctorate in security studies from Kansas State University; his dissertation studied cyberattacks using international relations theory and quantitative methods. Mauslein has been teaching at the graduate level for over a decade in the fields of political science (Oklahoma State University) and intelligence studies (Mercyhurst University). His recent research focuses on security-related issues such as the geographic indicators of terrorist targeting preferences and maritime piracy.