AI Use Cases at the National Aeronautics and Space Administration

Introduction

Artificial Intelligence (AI) is playing a pivotal role in advancing the capabilities of the National Aeronautics and Space Administration (NASA), driving innovation across a wide range of space exploration and research initiatives. From stress-testing complex systems and guiding intelligent robotics to analyzing data for planetary science and climate research, AI is helping NASA solve some of its most complex challenges. The following list explores the diverse AI use cases within NASA, highlighting how AI is pushing the boundaries of what is possible in scientific research and helping to unlock new frontiers in our understanding of the universe.

Use Cases

  • 1. AdaStress

    AdaStress is an innovative project that addresses the challenges of testing complex systems, particularly in scenarios where potential faults are rare but critical to safety. Traditional Monte Carlo sampling methods can be computationally intensive and may require an impractically large number of samples to identify these rare faults. Instead, AdaStress employs reinforcement learning techniques to optimize the sampling process, allowing for more efficient identification of low-likelihood yet high-impact faults. This approach enhances the reliability and safety of complex systems without the prohibitive computational costs.
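As a rough illustration of why guided search beats uniform sampling for rare faults, the sketch below contrasts naive Monte Carlo with a cross-entropy-style adaptive loop on a toy one-parameter system. The fault threshold, distributions, and severity function are all invented for the example and are not AdaStress's actual formulation.

```python
import random

random.seed(0)

FAULT_THRESHOLD = 0.999  # faults are rare under uniform sampling (~0.1%)

def severity(x):
    """Simulated system response: higher means closer to a fault."""
    return x

def naive_monte_carlo(budget):
    """Uniform sampling; simulations until the first fault (~1000 expected)."""
    for i in range(1, budget + 1):
        if severity(random.random()) > FAULT_THRESHOLD:
            return i
    return None

def adaptive_search(budget, batch=20, elite=5):
    """Cross-entropy-style search: refit a Gaussian to the most severe
    samples each round, steering later sampling toward fault regions."""
    mu, sigma = 0.5, 0.3
    used = 0
    while used < budget:
        xs = [min(1.0, max(0.0, random.gauss(mu, sigma))) for _ in range(batch)]
        used += batch
        xs.sort(key=severity, reverse=True)
        if severity(xs[0]) > FAULT_THRESHOLD:
            return used
        top = xs[:elite]
        mu = sum(top) / elite
        sigma = max(0.02, (sum((x - mu) ** 2 for x in top) / elite) ** 0.5)
    return None

print("fault found after", adaptive_search(2000), "simulations")
```

On this toy problem the adaptive loop typically finds a fault within a few batches of 20, where uniform sampling needs roughly a thousand draws on average.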

  • 2. Biological and Physical Sciences (BPS) RNA Sequencing Benchmark Training Dataset

    The Biological and Physical Sciences (BPS) RNA Sequencing Benchmark Training Dataset project involves the collection and analysis of RNA sequencing data from both spaceflown and control mouse liver samples, sourced from NASA GeneLab. To enhance the dataset, generative adversarial networks (GANs) are used to create synthetic data points. The project employs classification methods and hierarchical clustering techniques to identify genes that are predictive of specific biological outcomes, contributing to a better understanding of gene behavior in response to spaceflight conditions.
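The hierarchical clustering step can be illustrated with a minimal single-linkage implementation on toy expression profiles; the vectors below are fabricated, and the real project operates on full RNA sequencing count matrices.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(points, n_clusters):
    """Agglomerative clustering: repeatedly merge the two clusters
    whose closest members are nearest (single linkage)."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Toy "expression profiles": two genes behave alike, one differs.
profiles = [[1.0, 1.1, 0.9],   # gene A
            [1.1, 1.0, 1.0],   # gene B (similar to A)
            [5.0, 4.8, 5.2]]   # gene C (distinct)
print(single_linkage(profiles, 2))  # genes A and B group together
```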

  • 3. Biological and Physical Sciences Microscopy Benchmark Dataset

    The Biological and Physical Sciences Microscopy Benchmark Dataset project utilizes fluorescence microscopy images sourced from the Biological and Physical Sciences Open Science Data Repositories. This extensive dataset includes 93,488 images of individual nuclei from mouse fibroblast cells that have been irradiated with iron particles or X-rays, with DNA double-strand breaks labeled using the 53BP1 fluorescence marker. The images reveal DNA damage as small white foci. This study simulates the effects of space radiation, and the dataset has been prepared for AI applications, allowing researchers to test various AI tools. The dataset is publicly accessible on the Registry of Open Data on AWS, along with in-house developed AI tools for analysis.
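A crude baseline for foci detection is intensity thresholding followed by connected-component counting. The sketch below does this with a flood fill on a toy intensity grid; the grid and threshold are invented, and real pipelines operate on full-resolution fluorescence images.

```python
def count_foci(image, threshold):
    """Count connected bright regions (4-connectivity flood fill),
    a crude stand-in for counting 53BP1 foci in a nucleus image."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                count += 1
                stack = [(r, c)]          # flood-fill one focus
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and image[y][x] > threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Toy 5x5 intensity grid with two bright foci
nucleus = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 0],
    [0, 0, 0, 8, 0],
    [0, 0, 0, 8, 0],
]
print(count_foci(nucleus, 5))  # → 2
```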

  • 4. High-Performance Quantum-Classical Hybrid Deep Generative Modeling Parameterized by Energy-based Models for Flight-Operations Anomaly Detection

    The High-Performance Quantum-Classical Hybrid Deep Generative Modeling project focuses on developing scalable and explainable machine learning techniques for detecting anomalies in flight operations. By integrating classical computing methods, which enhance performance and reduce costs, with quantum computing capabilities that encode quantum correlations, the project aims to improve anomaly detection accuracy. The deep learning model analyzes time series data from 19 flight metrics collected by commercial aircraft flight recorders, predicting operational and safety-related anomalies specifically during take-off and landing phases, thereby enhancing flight safety and operational efficiency.
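Setting the quantum component aside, a minimal classical baseline for this kind of anomaly detection is a per-metric z-score model fit on nominal data. The three metrics below are invented stand-ins for the 19 recorded flight metrics, and the scoring rule is a textbook baseline, not the project's actual model.

```python
import math
import random

def fit_baseline(series):
    """Per-metric mean and standard deviation from nominal data."""
    stats = []
    for m in range(len(series[0])):
        vals = [row[m] for row in series]
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals)) or 1e-9
        stats.append((mu, sd))
    return stats

def anomaly_score(sample, stats):
    """Sum of squared z-scores across metrics; large means anomalous."""
    return sum(((v - mu) / sd) ** 2 for v, (mu, sd) in zip(sample, stats))

random.seed(0)
# Nominal training data: 3 metrics hovering near (0, 10, 100)
nominal = [[random.gauss(0, 1), random.gauss(10, 2), random.gauss(100, 5)]
           for _ in range(500)]
stats = fit_baseline(nominal)

normal_point = [0.2, 10.5, 101.0]
anomalous    = [6.0, 25.0, 60.0]   # far outside the nominal envelope
print(anomaly_score(normal_point, stats), anomaly_score(anomalous, stats))
```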

  • 5. Prediction of Mass Level in Radio Frequency Cryogenics

    The Prediction of Mass Level in Radio Frequency Cryogenics project employs machine learning to predict fluid levels in tanks by analyzing the radio frequency signatures of the fluids. This approach is particularly valuable in microgravity environments, where traditional fluid level detection methods are ineffective due to the lack of defined shapes for the fluids. By leveraging radio frequency data, the model provides accurate predictions of fluid levels, which is crucial for various space applications and experiments.
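The underlying idea, mapping an RF signature to a fill level, can be sketched as a one-variable least-squares fit. The frequency/level calibration pairs below are hypothetical, and the actual model presumably draws on richer RF features than a single resonant frequency.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration data: resonant frequency (MHz) vs fill level (%)
freqs  = [410.0, 408.5, 407.0, 405.5, 404.0]
levels = [10.0,  30.0,  50.0,  70.0,  90.0]

a, b = fit_line(freqs, levels)
predicted = a * 406.0 + b
print(round(predicted, 1))  # → 63.3
```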

  • 6. Pre-trained Microscopy Image Neural Network Encoders

    The Pre-trained Microscopy Image Neural Network Encoders project involves training convolutional neural network (CNN) encoders on over 100,000 microscopy images of various materials. These encoders, when applied to downstream microscopy tasks through transfer learning, demonstrate superior performance compared to traditional ImageNet encoders. The pre-trained MicroNet encoders have been effectively utilized for tasks such as semantic segmentation, instance segmentation, and regression. Ongoing efforts aim to extend their application to generative tasks and 3D texture synthesis. This technology has been instrumental in quantifying the microstructure of materials, including SLS core stage welds and Ni-based superalloys, facilitating a deeper understanding of the relationship between material processing, microstructure, and properties. By automating the analysis of microstructure from microscopy images, this approach significantly accelerates the design and development of new materials.

  • 7. Inverse Design of Materials

    The Inverse Design of Materials project aims to revolutionize the process of discovering new materials, which traditionally involves lengthy timelines of ten to twenty years for development and testing. This initiative focuses on enabling rapid discovery, optimization, qualification, and deployment of materials tailored for specific applications. By training supervised machine learning models to understand the relationship between material processing and performance, the project employs Bayesian optimization to identify the most effective experimental approaches. This methodology significantly reduces the time and cost associated with traditional experimental designs. Currently, the project is being applied to improve the quality of SLS core stage welds and will also support the development of better insulating materials for electrified aircraft through a fully autonomous robotic lab. The outputs include optimized recipes and methodologies for new materials, achieving a fourfold acceleration in the materials discovery lifecycle and potentially increasing throughput by ten times through parallel experimentation.
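A heavily simplified stand-in for the Bayesian optimization loop is sequential model-based optimization with a nearest-neighbor surrogate and a distance-based exploration bonus: each round, the candidate with the best predicted value plus exploration bonus is "run" as the next experiment. The weld_quality objective and every parameter below are invented for illustration and bear no relation to the project's actual models.

```python
def propose_next(tested, results, candidates, kappa=0.5):
    """Score each candidate by the value of its nearest tested point
    plus an exploration bonus for being far from all tested points."""
    def score(c):
        nearest = min(tested, key=lambda t: abs(t - c))
        return results[nearest] + kappa * abs(nearest - c)
    return max(candidates, key=score)

def run(objective, budget):
    tested = [0.0, 1.0]                      # initial "experiments"
    results = {x: objective(x) for x in tested}
    candidates = [i / 100 for i in range(101)]
    for _ in range(budget):
        x = propose_next(tested, results, candidates)
        results[x] = objective(x)            # run the proposed experiment
        if x not in tested:
            tested.append(x)
    return max(results, key=results.get)

def weld_quality(x):
    """Hypothetical process-parameter response with an optimum at 0.7."""
    return 1.0 - (x - 0.7) ** 2

best = run(weld_quality, budget=15)
print(best)
```

The loop homes in on the optimum with a handful of evaluations, which is the point of model-guided experiment selection over exhaustive sweeps.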

  • 8. Titan Methane Cloud Detection (GSFC Planetary Sciences Lab)

    The Titan Methane Cloud Detection project utilizes machine learning techniques to analyze imagery captured by the Cassini space probe, focusing on detecting and characterizing methane clouds on Saturn’s moon Titan. This research is crucial for understanding Titan’s atmospheric composition and potential for supporting life, as methane clouds can indicate geological and chemical processes occurring on the moon.

  • 9. ASPEN Mission Planner

    The ASPEN Mission Planner is an advanced, AI-driven application framework designed to support a diverse range of planning and scheduling applications in space missions. Its modular and reconfigurable architecture includes reusable software components such as a modeling language, resource management system, temporal reasoning capabilities, and a user-friendly graphical interface. ASPEN has been successfully utilized in various missions, including the Modified Antarctic Mapping Mission, Orbital Express, Earth Observing One, and ESA’s Rosetta Orbiter, demonstrating its versatility and effectiveness in complex mission planning.

  • 10. Autonomous Marine Vehicles (Single, Multiple)

    The Autonomous Marine Vehicles project focuses on developing underwater submersibles capable of operating autonomously in ocean environments to achieve scientific objectives, particularly the study of hydrothermal venting. Hydrothermal vents, which have been identified on Enceladus, are believed to harbor unique ecosystems and may be critical to understanding the origins of life. The project emphasizes autonomous science, enabling the vehicles to localize features of interest with minimal human intervention. A field program at Karasik Seamount in the Arctic Ocean was conducted to explore human-in-the-loop approaches, and subsequent developments included an autonomous nested search method for hydrothermal venting, tested through simulations. The vehicles have been deployed in various locations, including Monterey Bay and Chesapeake Bay, to gather valuable scientific data.

  • 11. CLASP Coverage Planning & Scheduling

    The CLASP (Compressed Large-scale Activity Scheduling and Planning) project serves as a long-range scheduling tool for space-based and aerial instruments modeled as pushbroom sensors. It addresses the challenge of optimizing the orientation and operational timings of these instruments to maximize coverage of target points while managing memory and energy constraints. CLASP utilizes geometric computations through the SPICE ephemeris toolkit to determine observation parameters. This tool enables mission planning teams to simulate the scientific return of a mission based on various operational models, including spacecraft trajectory and downlink strategies. The insights gained from these simulations can inform multiple aspects of mission design, including trajectory planning and spacecraft operations. CLASP is currently employed in several missions, including NISAR, ECOSTRESS, EMIT, and OCO-3, and has been utilized in over 100 mission analyses and studies.

  • 12. Onboard Planner for Mars2020 Rover (Perseverance)

    The Onboard Planner for the Mars 2020 Rover (Perseverance) is designed to incrementally create a feasible schedule for rover activities based on priority. The scheduler calculates valid time intervals for each activity, considering necessary preheating, maintenance, and the rover’s wake/sleep cycles. Once an activity is scheduled, it is not reconsidered for deletion or rescheduling, making the system non-backtracking. To address potential brittleness in this approach, the Copilot system conducts Monte Carlo-based stochastic analyses to adjust scheduling parameters, including activity priorities and temporal constraints. This project encompasses a broad range of research and engineering efforts aimed at enhancing the autonomy of future rovers, including planning, scheduling, path planning, onboard science operations, image processing, terrain classification, fault diagnosis, and location estimation. The initiative includes hands-on experimentation and demonstrations at JPL’s simulated Mars navigation yard.
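The non-backtracking, priority-first behavior described above can be sketched as a greedy scheduler that places each activity at the earliest conflict-free time in its window and never revisits a placement. The activity names, windows ("latest" here means the latest allowed end time), and durations are invented; the real planner also models preheating, maintenance, and wake/sleep cycles.

```python
def schedule(activities, horizon):
    """Greedy, non-backtracking scheduler: place activities in priority
    order at the earliest time that fits, never revisiting decisions."""
    placed = []   # (start, end, name), kept sorted by start time
    for act in sorted(activities, key=lambda a: a["priority"]):
        start = act["earliest"]
        while start + act["duration"] <= min(horizon, act["latest"]):
            clash = next((p for p in placed
                          if start < p[1] and p[0] < start + act["duration"]),
                         None)
            if clash is None:
                placed.append((start, start + act["duration"], act["name"]))
                placed.sort()
                break
            start = clash[1]          # jump past the conflicting activity
    return placed                     # unplaceable activities are dropped

activities = [
    {"name": "drive",   "priority": 1, "earliest": 0, "latest": 10, "duration": 4},
    {"name": "drill",   "priority": 2, "earliest": 0, "latest": 10, "duration": 3},
    {"name": "imaging", "priority": 3, "earliest": 0, "latest": 6,  "duration": 2},
]
print(schedule(activities, horizon=12))
```

Note how "imaging" is simply dropped once its window closes rather than triggering a reshuffle: that is the brittleness a Monte Carlo analysis of priorities and constraints would probe.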

  • 13. SensorWeb: Volcano, Flood, Wildfire, and Others

    The SensorWeb project is an innovative initiative that integrates a network of sensors with software and internet connectivity to create an autonomous satellite observation response system. This flexible and modular architecture allows for the expansion of sensor capabilities, customization of trigger conditions, and tailored responses to various environmental phenomena. The system has been successfully implemented for global surveillance of volcanoes and has been tested for monitoring flooding, cryospheric events, and atmospheric conditions. By utilizing low-resolution, high-coverage sensors to trigger observations from high-resolution instruments, the SensorWeb enhances the ability to monitor critical events. This project is currently focused on observing the Earth’s 50 most active volcanoes, as well as conducting experiments related to flooding, wildfires, and cryospheric changes, such as snow and ice dynamics.
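The trigger-and-respond pattern, low-resolution alerts tasking high-resolution follow-ups, reduces to a simple rule check at its core. The site names, readings, and threshold below are placeholders, not actual SensorWeb trigger conditions.

```python
def plan_followups(alerts, threshold):
    """Turn low-resolution sensor alerts into high-resolution tasking
    requests whenever a trigger condition is met."""
    taskings = []
    for site, reading in alerts:
        if reading >= threshold:
            taskings.append({"target": site,
                             "instrument": "high-res imager",
                             "reason": f"thermal reading {reading}"})
    return taskings

# Hypothetical low-res thermal readings from a wide-coverage sensor
alerts = [("Etna", 42.0), ("Kilauea", 87.5), ("Erebus", 12.3)]
for task in plan_followups(alerts, threshold=50.0):
    print(task)
```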

  • 14. TRN (Terrain Relative Navigation)

    Terrain Relative Navigation (TRN) is a critical technology used during Mars landings to enhance the safety and accuracy of landing site selection. By automatically matching landmarks identified in descent images to a pre-generated map from orbital imagery, TRN estimates the spacecraft’s position in real-time. This position estimate is essential for selecting a safe and accessible landing site, particularly in regions with significant hazards. TRN was successfully implemented during the Mars 2020 mission landing on February 18, 2021, and it is also planned for use in the upcoming Mars Sample Return Lander mission, further demonstrating its importance in planetary exploration.
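Under a pure-translation model, matching descent-image landmarks to a map reduces to averaging the per-landmark offsets, which is the least-squares translation estimate. The coordinates below are fabricated, and real TRN must also handle attitude, scale, and outlier matches.

```python
def estimate_offset(map_landmarks, observed):
    """Least-squares translation estimate: with a pure-translation model,
    the best fit is the mean of the per-landmark offsets."""
    n = len(observed)
    dx = sum(o[0] - m[0] for m, o in zip(map_landmarks, observed)) / n
    dy = sum(o[1] - m[1] for m, o in zip(map_landmarks, observed)) / n
    return dx, dy

# Hypothetical matched landmarks: map position vs position seen in the
# descent image (same craters, shifted by the spacecraft's drift)
map_pts  = [(100.0, 200.0), (150.0, 180.0), (130.0, 240.0)]
seen_pts = [(112.0, 195.0), (162.1, 174.9), (141.9, 235.1)]

dx, dy = estimate_offset(map_pts, seen_pts)
print(round(dx, 1), round(dy, 1))  # → 12.0 -5.0
```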

  • 15. Autonomous WAiting Room Evaluation (AWARE)

    The Autonomous WAiting Room Evaluation (AWARE) project employs a security camera and the YOLO machine learning model to monitor and count the number of individuals waiting for service at Langley’s Badge & Pass Office. When the number of people waiting exceeds a predefined threshold, the system automatically sends texts and emails to request additional assistance at the service counters. This initiative enhances operational efficiency by ensuring that service areas are adequately staffed during peak times, improving the overall experience for visitors and staff alike.
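The alerting logic reduces to counting detections against a threshold. The sketch below mocks the detector output rather than running YOLO, and the threshold, confidence cutoff, and message text are all invented.

```python
OCCUPANCY_THRESHOLD = 5

def count_people(detections):
    """Count 'person' detections above a confidence cutoff, in the
    (label, confidence) shape an object detector typically reports."""
    return sum(1 for label, conf in detections
               if label == "person" and conf >= 0.5)

def check_waiting_room(detections, notify):
    count = count_people(detections)
    if count > OCCUPANCY_THRESHOLD:
        notify(f"{count} people waiting; please open another counter")
    return count

# Mocked detector output for one camera frame: (class label, confidence)
frame = [("person", 0.91), ("person", 0.88), ("person", 0.74),
         ("person", 0.69), ("person", 0.55), ("person", 0.52),
         ("chair", 0.97), ("person", 0.31)]

sent = []
check_waiting_room(frame, notify=sent.append)
print(sent)
```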

  • 16. Geophysical Observations Toolkit for Evaluating Coral Health (GOTECH)

    The Geophysical Observations Toolkit for Evaluating Coral Health (GOTECH) project involved three capstone initiatives conducted between 2021 and 2022 in collaboration with Georgia Tech and the University of Rochester. The goal was to develop machine learning models capable of analyzing satellite LIDAR imagery to detect coral reefs and assess their health. Supported by Coral Vita, a non-governmental organization, and the National Institute of Aerospace, the findings from this project were presented at the United Nations COP27, highlighting the importance of technology in coral conservation efforts.

  • 17. Lessons Learned Bot (LLB)

    The Lessons Learned Bot (LLB) is an innovative tool designed to enhance the accessibility of lessons learned documents for NASA users. This near real-time application integrates with Microsoft Excel, allowing users to search for relevant lessons learned content based on the text in selected cells. The LLB utilizes a trained machine learning model and natural language processing (NLP) algorithms to identify and rank relevant records, making it easier for users to find applicable lessons. The installation package includes a pre-trained dataset of NASA’s lessons learned and tools for users to train the model on their own datasets. An API version of the software is also available for integration with other applications within the agency, further facilitating knowledge sharing and learning from past experiences.
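One standard way to rank records against query text, which may or may not match LLB's trained model, is TF-IDF weighting with cosine similarity. The corpus below is fabricated, and for simplicity the query is folded into the corpus when computing document frequencies.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF weight dictionaries for a small corpus."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    return [{t: c / len(toks) * math.log(n / df[t])
             for t, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

records = [
    "battery overheated during thermal vacuum test",
    "parachute deployment delayed by software timing fault",
    "thermal sensor calibration drifted before launch",
]
vecs = tfidf_vectors(records + ["battery thermal anomaly"])
query, corpus = vecs[-1], vecs[:-1]
ranked = sorted(range(len(corpus)),
                key=lambda i: cosine(query, corpus[i]), reverse=True)
print(ranked)  # record indices, most relevant first
```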

  • 18. Pedestrian Safety Corridors for Drone Test Range

    The Pedestrian Safety Corridors for Drone Test Range project at NASA Langley Research Center (LaRC) focuses on enhancing the safety of Unmanned Aerial Systems (UAS) operations in areas where human activity occurs, such as walking and driving zones. By expanding the on-site UAS test range, the project utilizes image recognition technology from a parking advisor system to detect pedestrian traffic. This system provides near-real-time detection of human presence, allowing for a statistical assessment of areas with varying pedestrian density. The inputs for this project include camera signals and hand-labeled training data, while the outputs consist of maps that indicate pedestrian traffic density. The findings from this project have been integrated into the GRASP flight risk simulation tool, improving safety protocols for UAS operations.

  • 19. Airplane Detection

    The Airplane Detection project employs deep learning techniques to identify and detect airplanes using high-resolution satellite imagery. This initiative enhances the ability to monitor aircraft activity and movements from space, providing valuable data for various applications, including air traffic management, environmental monitoring, and security assessments.

  • 20. Automatic Detection of Impervious Surfaces from Remotely Sensed Data Using Deep Learning

    The Automatic Detection of Impervious Surfaces project utilizes a deep learning approach based on a U-Net architecture, incorporating VGG-19 as the encoder block along with a custom decoder block. This model is designed to accurately map impervious surfaces using data from Landsat and OpenStreetMap (OSM) patches. The project aims to improve land cover classification and urban planning efforts by providing precise information on impervious surfaces, which are critical for understanding urbanization and its environmental impacts.

  • 21. Deep Learning Approaches for Mapping Surface Water Using Sentinel-1

    The Deep Learning Approaches for Mapping Surface Water project employs a U-Net based architecture to analyze and map surface water using Synthetic Aperture Radar (SAR) images from the Sentinel-1 satellite. This project aims to enhance the accuracy of surface water detection and monitoring, providing essential data for water resource management, flood monitoring, and environmental studies.

  • 22. Deep Learning-based Hurricane Intensity Estimator

    The Deep Learning-based Hurricane Intensity Estimator is a web-based tool designed to provide situational awareness during hurricane events. By utilizing deep learning algorithms to analyze satellite images, the tool objectively estimates hurricane wind speeds, offering critical information for emergency response and disaster management efforts. This technology enhances the ability to monitor and assess hurricane intensity in real time, improving preparedness and response strategies.

  • 23. Forecasting Algal Blooms With AI in Lake Atitlán

    The Forecasting Algal Blooms With AI in Lake Atitlán project focuses on analyzing satellite image datasets to identify variables that may predict future algal blooms. By applying machine learning techniques, the project aims to uncover the triggers of algal blooms, enabling precise preventative actions not only in Lake Atitlán but also in other freshwater bodies across Central and South America. This research is vital for protecting water quality and aquatic ecosystems from the harmful effects of algal blooms.

  • 24. GCMD Keyword Recommender (GKR)

    The GCMD Keyword Recommender (GKR) is a tool that utilizes natural language processing (NLP) techniques to suggest relevant science keywords for research and data discovery. This tool enhances the ability of researchers and scientists to find and categorize scientific data effectively, improving the accessibility and usability of scientific information across various disciplines.

  • 25. ImageLabeler

    The ImageLabeler is a web-based collaborative tool designed for generating training data for machine learning applications. This platform allows users to collaboratively label images, facilitating the creation of high-quality training datasets for various machine learning models. By streamlining the data labeling process, ImageLabeler enhances the efficiency of developing and training machine learning algorithms, supporting a wide range of applications in image recognition and analysis.

  • 26. Mapping sugarcane in Thailand using transfer learning, a lightweight convolutional neural network, NICFI high resolution satellite imagery and Google Earth Engine

    The project on mapping sugarcane in Thailand employs a U-Net based architecture integrated with a MobileNetV2 encoder, utilizing transfer learning from a global model. This approach leverages high-resolution satellite imagery from the NICFI (Norwegian International Climate and Forest Initiative) mosaic for training purposes. The goal is to accurately identify and map sugarcane pixels, contributing to agricultural monitoring and management efforts in the region.

  • 27. Predicting Streamflow with Deep Learning

    The Predicting Streamflow project utilizes a long short-term memory (LSTM) model to forecast streamflow at United States Geological Survey (USGS) gauge sites. The model incorporates data from the NASA Land Information System along with precipitation forecasts to enhance the accuracy of streamflow predictions. This initiative aims to improve water resource management and flood forecasting capabilities.
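As a far simpler baseline than an LSTM, a one-step autoregressive fit conveys the shape of the forecasting task: predict the next flow from the current one. The recession-curve flows below are fabricated, and the real system ingests NASA Land Information System states and precipitation forecasts rather than flow history alone.

```python
def fit_ar1(series):
    """Least-squares fit of y[t+1] = a * y[t] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def forecast(series, steps, a, b):
    """Roll the fitted recurrence forward for multi-step forecasts."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

# Hypothetical daily streamflow receding after a storm (m^3/s)
flow = [120.0, 96.0, 76.8, 61.44, 49.15, 39.32]
a, b = fit_ar1(flow)
print(forecast(flow, 3, a, b))
```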

  • 28. Ship Detection

    The Ship Detection project employs deep learning techniques to identify and detect ships using high-resolution satellite imagery. This capability is essential for maritime monitoring, security, and environmental assessments, providing valuable data for various applications, including shipping traffic analysis and illegal fishing detection.

  • 29. Similarity Search for Earth Science Image Archive

    The Similarity Search for Earth Science Image Archive project utilizes a self-supervised learning approach to enable efficient searching of image archives based on a query image. This method enhances the ability to retrieve relevant images from vast datasets, facilitating research and analysis in Earth sciences by improving access to pertinent visual data.

Conclusion

The diverse AI use cases within NASA showcase the transformative impact of advanced technologies on space exploration, scientific research, and operational efficiency. From enhancing safety and reliability through projects like AdaStress and Terrain Relative Navigation to accelerating material discovery with Inverse Design of Materials, AI is at the forefront of innovation in the aerospace sector. These initiatives not only improve mission outcomes but also open new frontiers in our understanding of the universe and our planet. As NASA continues to harness the power of AI, we can expect even greater advancements in space exploration and the development of cutting-edge technologies that benefit both space missions and life on Earth.
