September 2024 Online Exclusive Article

AI-HyperCal

In-Scene Hyperspectral Imagery Calibration Using Artificial Intelligence Known-Point Identification

 

Capt. Chelsey Sturtevant, U.S. Air Force

 


 
An MQ-9 Reaper sits on the 361st Expeditionary Attack Squadron flight line at an undisclosed location, 6 August 2022

It is late 2016. Eastern Aleppo has fallen to Russian-backed Syrian forces, North Korea has ramped up nuclear tests again, and the year with the highest number of remotely piloted aircraft (RPA) strikes is drawing to a close. All of those flight hours produced an uptick in airborne intelligence, surveillance, and reconnaissance (ISR), overflowing hard drives with millions of hours of video and countless high-resolution still images uploaded to the U.S. Air Force Distributed Common Ground System.1 All of this data, however, is meaningless until it is processed, exploited, and disseminated (PED) by imagery analysts. PED ensures that imagery is of high quality and that objects of interest are characterized by location, material, size, and context. Timing relative to surrounding institutional schedules, the location of other objects, and personnel movements can all shape the final intelligence assessment.

With airborne ISR collection having grown more than fivefold over three years, imagery analysts could not keep up with the new data crunch. Each collection may take several analysts an entire day to review and may include several types of intelligence, such as imagery, electronic signals, or other information. This enormous data store became the perfect training environment for a new artificial intelligence (AI) project called Project Maven.2

Project Maven grew out of a partnership between the Pentagon and Google in which Google’s AI technology and neural net backbone were used to analyze RPA footage and flag images for further review by imagery analysts. Although Google stepped away from the partnership, the Department of Defense has continued the effort with the National Geospatial-Intelligence Agency, which took over the program in 2022.3 New applications include automatic search and detection for various objects; integration with other machine learning platforms; large language models; and data labeling that lets analysts search a database of imagery intelligence for objects and files using plain English. With these capabilities integrated, Project Maven targeting algorithms were used in February 2024 to help identify airstrike targets in Iraq and Syria.4

Integrating AI into operational intelligence collection can pose risks to established processes. Promising AI projects frequently stagnate in the “valley of death,” making organizations hesitant to embark on development that pulls much-needed resources away from more conservative proposals.5 Deep learning implementations require huge troves of data that can be difficult to acquire and costly to tag with relevant metadata. The resulting AI often lacks transparency, leaving users with a “black box” that may give correct results but has little agility for new applications without retraining and that, because of that opacity, requires quality control and supervision for any mission-essential application.

The evolution of Project Maven is a great example of how different types of AI can be successfully implemented for military applications. Google’s initial imagery backbone was trained mainly by Google users sorting and labeling images; think of those annoying CAPTCHA questions that “prove you are not a robot” by asking how many crosswalks, stoplights, and motorcycles you can click on.6 Project Maven applies the same approach by having imagery analysts select vehicles, weapons, and missile sites, a technique called “human in the loop,” in which a trained technician supervises the machine learning.7 As the “human in the loop” process matures, the system begins to make its own guesses, and the human instead provides feedback on whether each guess is correct. Once the system reaches a high enough proficiency, humans can be removed from most of the analysis and brought in only for spot checks. In this way, the learning shifts from a human-initiated loop to a machine-initiated loop, accounting for most of the “processing” part of the PED cycle.
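
To make that handoff concrete, the sketch below shows one way such a loop could be wired in code: the analyst labels everything at first, then only low-confidence guesses and occasional spot checks, so the workflow shifts from human-initiated to machine-initiated. The toy classifier, the thresholds, and the ask_analyst helper are hypothetical placeholders, not Project Maven components.

```python
# Minimal sketch of a human-in-the-loop labeling workflow that shifts toward a
# machine-initiated loop as model confidence grows. All names and numbers are
# illustrative placeholders, not Project Maven interfaces.
import random

class ToyClassifier:
    """Stand-in for an image classifier; returns a (label, confidence) guess."""
    def __init__(self):
        self.examples = []  # accumulated (features, label) training pairs

    def predict(self, features):
        # Confidence grows with the amount of labeled data the model has seen.
        confidence = min(0.99, 0.5 + 0.001 * len(self.examples))
        guess = "vehicle" if sum(features) > 1.5 else "background"
        return guess, confidence

    def fit(self, features, label):
        self.examples.append((features, label))

def ask_analyst(features):
    """Placeholder for a trained analyst labeling the image chip."""
    return "vehicle" if sum(features) > 1.5 else "background"

model = ToyClassifier()
CONFIDENCE_GATE = 0.90   # below this, the human stays in the loop
SPOT_CHECK_RATE = 0.05   # fraction of confident guesses still reviewed

for _ in range(1000):
    chip = [random.random(), random.random(), random.random()]  # fake image features
    guess, confidence = model.predict(chip)
    if confidence < CONFIDENCE_GATE or random.random() < SPOT_CHECK_RATE:
        label = ask_analyst(chip)  # human-initiated: analyst provides or corrects the label
    else:
        label = guess              # machine-initiated: the model's guess is accepted
    model.fit(chip, label)
```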

Once the initial labeling of these objects is complete and built into a database, intelligence analysts can be consulted to determine which pieces of context (e.g., proximity, time, and personnel) matter for the “exploitation” part of the PED cycle. As confidence in the system grows, there are two options: the machine learning algorithm can be set to identify patterns for these searches automatically (machine-initiated), or the analysts can be consulted and the types of searches developed explicitly (human-initiated). As additional pieces of analysis are identified, they can be addressed while confidence in the system is high.

The ability of Project Maven to identify objects on the ground presents another exciting opportunity for remote sensing: analyzing the massive amounts of data contained in hyperspectral imagery (HSI). The key to imagery analysis for Project Maven is computer vision (see figure 1): each pixel in an image is a combination of red, green, and blue. Based on how bright each of those three colors is, and whether it has seen that specific pattern of colors before, the computer can determine whether the pixel resembles any object it has already learned. Images must be of similar quality to those the AI was trained on. HSI contains not three colors but anywhere from twenty to hundreds of spectral bands, ranging from the far infrared to the ultraviolet. This big data is complicated, but it can enable analysts to identify the actual chemical composition of a scene beyond what is visible to the human eye. Widespread use of HSI in airborne ISR has so far been prevented by logistical issues, including the system size required for high-altitude operation, the large amounts of data to be transmitted, atmospheric variability, power requirements on airborne platforms, and cost, in addition to the amount of final analysis required to process the data. AI and short-distance ground applications can significantly assist with many of these challenges.

Figure 1
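
As a rough illustration of the difference in data volume, the sketch below contrasts a conventional three-color frame with a hyperspectral cube in which every pixel carries a full spectrum. The frame size, band count, and wavelength range are illustrative assumptions, not tied to any fielded sensor.

```python
# Minimal sketch contrasting an RGB frame with a hyperspectral cube using NumPy.
import numpy as np

rows, cols = 512, 640
rgb_frame = np.zeros((rows, cols, 3), dtype=np.uint8)       # three colors per pixel

bands = 224                                                  # tens to hundreds of bands
wavelengths_nm = np.linspace(400, 2500, bands)               # visible through shortwave infrared
hsi_cube = np.zeros((rows, cols, bands), dtype=np.float32)   # a full spectrum per pixel

# Each pixel now carries a spectrum that can be matched against material signatures.
pixel_spectrum = hsi_cube[256, 320, :]                       # shape: (224,)
print(f"RGB frame: {rgb_frame.nbytes / 1e6:.1f} MB, HSI cube: {hsi_cube.nbytes / 1e6:.1f} MB")
```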

Motivating this effort are many of the most complex remote sensing targets that can be cracked using HSI, such as separating camouflage and decoys, identifying invisible chemical plumes, sighting disturbed earth, finding crash debris, and revealing submerged coral reefs, along with many other targets that are not readily discernible in standard visible or infrared imagery.8 High-power, large-footprint systems on aircraft and vehicles can support remote sensing for assessing soil type and condition, snowpack data and avalanche risk, detecting movement, and finding improvised explosive devices.9 Smaller systems sacrifice some of that standoff capability, but the same HSI systems being integrated for PackBot military robot navigation or smaller (group 2-3) unmanned aircraft can still inform the Next-Generation NATO Reference Mobility Model and Unified Soil Classification System assessments for larger vehicles traveling behind.10 Sections of aircraft or rotorcraft wreckage can easily blend in with surrounding vegetation, dry grass, and soil when an area is searched with the human eye, whereas HSI can search for a chemical signature with peaks at 450–500 nm to cue civilian search and rescue to the location of debris (see figure 2).11 Where camouflage and leafy surroundings both appear green in full-color imagery (visible wavelengths up to about 700 nm), HSI can distinguish between them by their reflectance in the 1000 to 2000 nm range.12 HSI can even be used quantitatively to determine the source of a gas and the amount released.13

Figure 2
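
A common way to search a cube for a material signature like the debris cue described above is to compare each pixel’s spectrum against a reference spectrum; the sketch below uses the spectral angle for that comparison. The reference spectrum, the scene data, and the detection threshold are synthetic placeholders, not a validated debris signature.

```python
# Minimal sketch of matching pixel spectra against a reference material signature
# using the spectral angle: a small angle means the pixel "looks like" the material.
import numpy as np

bands = 224
wavelengths_nm = np.linspace(400, 2500, bands)

# Hypothetical reference: a material reflecting strongly around 450-500 nm.
reference = np.exp(-((wavelengths_nm - 475.0) / 40.0) ** 2) + 0.05

def spectral_angle(pixel, ref):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos_theta = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref) + 1e-12)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

hsi_cube = np.random.rand(100, 100, bands).astype(np.float32)   # stand-in scene
angles = np.apply_along_axis(spectral_angle, 2, hsi_cube, reference)
detections = angles < 0.10   # threshold in radians; tuned per application
print("candidate debris pixels:", int(detections.sum()))
```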

As system sizes and power requirements come down and low-earth orbit data links via systems like Starlink increase the amount of data that can be sent, we may well be reaching a point where widespread HSI becomes a feasible option for overhead remote sensing and for machine vision applications on terrestrial and maritime manned and unmanned platforms. The next quick win for improving adoption will be to use AI to better PED the large files (either on-site with the sensor or after dissemination to imagery analysts).

The first processing correction would be for illumination variability (e.g., time of day, season, and angle of the image), obstructions (e.g., sand, dust, smoke, and clouds), and atmospheric factors (e.g., humidity and air pressure). Even slight weather patterns pushing through the air may result in a “darker” image and prevent identification of chemicals because many of the spectral bands are absorbed by water in the air.14 If there is a known object in the scene that can be used for calibration, the image can be artificially brightened to compensate.15

This is where Project Maven can help: identifying locations with known material composition (e.g., an aluminum water tower or a nearby lake) and folding in other data such as weather station observations. By tracking the locations of useful reference objects along with the corrections already baked into the system (camera response at different temperatures, amount of sunlight at various times of day, etc.), AI could significantly reduce the demand on valuable analyst time and energy and decrease the risk of calibration errors.
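
A minimal sketch of that idea follows, in the spirit of the empirical line method: once the AI has flagged the pixels belonging to a known material, the scene can be rescaled band by band so that the measured spectrum of the reference object matches its library value. The mask, the library spectrum, and the single-reference simplification (per-band gain only, no offset) are illustrative assumptions, not the author’s implementation.

```python
# Minimal sketch of in-scene calibration against a known reference object:
# derive per-band gains from one reference material and apply them scene-wide.
import numpy as np

bands = 224
hsi_cube = np.random.rand(100, 100, bands).astype(np.float32)  # uncalibrated scene

reference_mask = np.zeros((100, 100), dtype=bool)
reference_mask[40:45, 60:70] = True          # pixels the AI flagged as the known object

# Known reflectance spectrum of the reference material (flat placeholder value).
library_reflectance = np.full(bands, 0.05, dtype=np.float32)

measured = hsi_cube[reference_mask].mean(axis=0)   # average measured spectrum of the object
gain = library_reflectance / (measured + 1e-12)    # per-band correction factor

calibrated_cube = hsi_cube * gain                  # apply the gains across the whole scene
# A second reference (e.g., a dark target) would also allow a per-band offset.
```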

Similar to the Project Maven integration, HSI training could be implemented as another layer of a “machine/human in the loop” integration in which the AI provides an initial guess and the analyst accepts or corrects it before the next phase. This way, AI training is performed by existing manpower, and the AI decreases the amount of input required from the analyst as its guesses improve. As the virtuous cycle continues, analyst input shrinks to quality control only, producing an AI that can be tested and evaluated continually until it meets a sufficient level of quality. A lack of available, skilled imagery analysts has kept this implementation out of the broader research base because academic and industrial studies do not have that volume of trained specialists on hand. However, with Project Maven becoming a program of record this year, its analysts may be able to provide the needed feedback.

Soldiers assigned to Charlie Company, 2nd Battalion, 35th Infantry Regiment, 3rd Infantry Brigade Combat Team, 25th Infantry Division, train on Battle Drill 6, Enter and Clear a Room, on 10 February 2021 at Schofield Barracks, Hawaii

This transparency prevents the “black box” effect so common in AI implementations, where raw imagery goes in and comes out looking better but it is never understood why the AI made those specific changes. Interrelated factors can be tracked as a separate term that carries weight from both elements; if the optics corrections also depend on outdoor air temperature, a portion of the correction will contain weight from both factors. Further technical work on actual implementation is being pursued by several reliable sources, including analysis of the resulting images to better identify groups of chemicals that span multiple pixels and mixtures of chemicals.16 Given the success of Project Maven, the Distributed Common Ground System open architecture, and the U.S. Air Force DevOps implementation, the U.S. Department of Defense is continuing to expand into other AI projects and enable many technological solutions that were previously unachievable.17
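
As a simple illustration of how such interrelated factors can stay auditable rather than disappearing into a black box, the sketch below expresses a per-band correction as a weighted sum of named terms, including a cross term that carries weight from both optics temperature and outdoor air temperature. All coefficients and functional forms are placeholders, not fitted values from any real system.

```python
# Minimal sketch of a correction model whose terms remain individually inspectable,
# including an interaction term shared between optics and air temperature.
def band_correction(optics_temp_c, air_temp_c, sun_elevation_deg,
                    w_optics=0.4, w_air=0.3, w_sun=0.2, w_cross=0.1):
    """Return a gain plus the named breakdown of every contributing term."""
    terms = {
        "optics": w_optics * (1.0 + 0.002 * (optics_temp_c - 20.0)),
        "air": w_air * (1.0 - 0.001 * (air_temp_c - 15.0)),
        "sun": w_sun * max(sun_elevation_deg, 0.0) / 90.0,
        "optics_air": w_cross * 0.0005 * (optics_temp_c - 20.0) * (air_temp_c - 15.0),
    }
    return sum(terms.values()), terms  # total gain plus the auditable breakdown

gain, breakdown = band_correction(optics_temp_c=35.0, air_temp_c=28.0, sun_elevation_deg=60.0)
print(f"gain = {gain:.3f}", breakdown)
```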

While these transitions can be a heavy initial lift for an established institution, the ability to keep iterating and making improvements at the speed of the fight brings the technology we need today to the warfighter, who can give real feedback, rather than leaving it bogged down in a testing loop. With careful integration, Project Maven and other projects have shown that machine learning and AI can be adopted into operations on a supervised basis, when supported by a stable concept of operations, to enhance agile operations and bring in capabilities that had previously seemed like science fiction. Tomorrow’s fight is coming; will we be armed to fight on the AI battlefield?

The views expressed are those of the author and do not reflect the official guidance or position of the U.S. government, the Department of Defense, the U.S. Air Force, or the U.S. Space Force.

 


Notes

  1. James Lawrence, “Why the Traditional AOC and AF DCGS Must Modernize in the New Age of Warfare,” Modern Battlespace, 7 February 2024, https://modernbattlespace.com/2024/02/07/why-the-traditional-aoc-and-af-dcgs-must-modernize-in-the-new-age-of-warfare/.
  2. Cheryl Pellerin, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” U.S. Department of Defense News, 21 July 2017, https://www.defense.gov/News/News-Stories/Article/article/1254719.
  3. Jaspreet Gill, “NGA Making ‘Significant Advances’ Months into AI-Focused Project Maven Takeover,” Breaking Defense, 24 May 2023, https://breakingdefense.com/2023/05/nga-making-significant-advances-months-into-ai-focused-project-maven-takeover.
  4. Katrina Manson, “US Used AI to Help Find Middle East Targets for Airstrikes,” Bloomberg News, 26 February 2024, https://www.bloomberg.com/news/articles/2024-02-26/us-says-it-used-ai-to-help-find-targets-it-hit-in-iraq-syria-and-yemen.
  5. Mohana Ravindranath, “Watchdog: Too Many DARPA Projects Enter ‘Valley of Death,’ Don’t Progress,” Nextgov/FCW, 25 November 2015, http://www.nextgov.com/emergingtech/2015/11/watchdog-darpa-needs-improve-tech-transition-tracking/124011; Chris Cornillie, “Can Pentagon Bridge Artificial Intelligence’s ‘Valley of Death?,’” Bloomberg Government, 14 September 2018, https://about.bgov.com/news/can-pentagon-bridge-artificial-intelligences-valley-of-death.
  6. Rugare Maruzani, “Are You Unwittingly Helping to Train Google’s AI Models?,” Medium, 26 January 2021, https://towardsdatascience.com/are-you-unwittingly-helping-to-train-googles-ai-models-f318dea53aee.
  7. Chenhao Tan, “Human-Centered Machine Learning: A Machine-in-the-loop Approach,” Medium, 15 February 2018, https://medium.com/@ChenhaoTan/human-centered-machine-learning-a-machine-in-the-loop-approach-ed024db34fe7.
  8. Michal Shimoni, Rob Haelterman, and Christiaan Perneel, “Hyperspectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques,” IEEE Geoscience and Remote Sensing Magazine 7, no. 2 (June 2019): 101–17, https://doi.org/10.1109/MGRS.2019.2902525.
  9. Huan Yu et al., “Hyperspectral Remote Sensing Applications in Soil: A Review,” in Hyperspectral Remote Sensing: Theory and Applications, ed. Prem Chandra Pandey et al. (Amsterdam: Elsevier, 2020), 269–91, https://doi.org/10.1016/B978-0-08-102894-0.00011-5; Yanyi Li et al., “Adoption of Machine Learning in Intelligent Terrain Classification of Hyperspectral Remote Sensing Images,” Computational Intelligence and Neuroscience 2020 (2020): Article 8886932, https://doi.org/10.1155/2020/8886932; Shubham Awasthi and Divyesh Varade, “Recent Advances in the Remote Sensing of Alpine Snow: A Review,” GIScience & Remote Sensing 58, no. 6 (2021): 852–88, https://doi.org/10.1080/15481603.2021.1946938; Sicong Liu et al., “A Review of Change Detection in Multitemporal Hyperspectral Images: Current Techniques, Applications, and Challenges,” IEEE Geoscience and Remote Sensing Magazine 7, no. 2 (2019): 140–58, https://doi.org/10.1109/MGRS.2019.2898520; Ayesha Shafique et al., “Deep Learning-Based Change Detection in Remote Sensing Images: A Review,” Remote Sensing 14, no. 4 (2022): 871, https://doi.org/10.3390/rs14040871.
  10. Jordan Ewing et al., “Remote Sensing of Terrain Strength for Mobility Modeling & Simulation,” Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (2020): 1–8, http://gvsets.ndia-mich.org/publication.php?documentID=804; Jordan Ewing et al., “Utilizing Hyperspectral Remote Sensing for Soil Gradation,” Remote Sensing 12, no. 20 (2020): 3312, https://doi.org/10.3390/rs12203312; Kacper Jakubczyk et al., “Hyperspectral Imaging for Mobile Robot Navigation,” Sensors 23, no. 1 (2023): 383, https://doi.org/10.3390/s23010383.
  11. Michael T. Eismann, Alan D. Stocker, and Nasser M. Nasrabadi, “Automated Hyperspectral Cueing for Civilian Search and Rescue,” Proceedings of the IEEE 97, no. 6 (2009): 1031–55, https://doi.org/10.1109/JPROC.2009.2013561.
  12. Shimoni, Haelterman, and Perneel, “Hyperspectral Imaging for Military and Security Applications.”
  13. M. A. Rodríguez-Conejo and Juan Meléndez, “Hyperspectral Quantitative Imaging of Gas Sources in the Mid-Infrared,” Applied Optics 54, no. 2 (2015): 141–49, https://doi.org/10.1364/AO.54.000141.
  14. Gerrit Polder and Gerie W. A. M. van der Heijden, “Visualization of Spectral Images,” Proceedings: Visualization and Optimization Techniques 4553 (2001): 132–37, https://doi.org/10.1117/12.441578.
  15. Daniel C. Harris, “Fundamentals of Spectrophotometry,” in Quantitative Chemical Analysis, 7th ed. (New York: W. H. Freeman, 2007), 380–84.
  16. Alexander F. H. Goetz, “Three Decades of Hyperspectral Remote Sensing of the Earth: A Personal View,” Remote Sensing of Environment 113, no. S1 (September 2009): S5–16, https://doi.org/10.1016/j.rse.2007.12.014; M. E. Paoletti et al., “Deep Learning Classifiers for Hyperspectral Imaging: A Review,” ISPRS Journal of Photogrammetry and Remote Sensing 158 (December 2019): 279–317, https://doi.org/10.1016/j.isprsjprs.2019.09.006; Richard J. Murphy, Sven Schneider, and Sildomar T. Monteiro, “Consistency of Measurements of Wavelength Position from Hyperspectral Imagery: Use of the Ferric Iron Crystal Field Absorption at 900 nm as an Indicator of Mineralogy,” IEEE Transactions on Geoscience and Remote Sensing 52, no. 5 (May 2014): 2843–57, https://doi.org/10.1109/TGRS.2013.2266672.
  17. Nicole Blake Johnson, “The Air Force Is Implementing DevOps, Here’s What It Means for Airmen,” GovLoop, 30 August 2019, https://www.govloop.com/the-air-force-is-implementing-devops-heres-what-it-means-for-airmen; Ireland Degges, “How Project Maven & Other Key Efforts Are Driving AI Success Across the US Government,” GovCon Wire, 14 March 2024, https://www.govconwire.com/2024/03/how-project-maven-and-other-key-efforts-are-driving-ai-success-across-the-us-government.

 

Capt. Chelsey Sturtevant, U.S. Air Force, is a pilot at the 867th Attack Squadron at Creech Air Force Base, Nevada. She holds an MS in chemistry from Colorado State University with experience in laser spectroscopy and optics manufacturing.

 
