AI Use Cases in Department of Commerce

Introduction

In the evolving landscape of commerce, artificial intelligence (AI) is changing how businesses interact with customers, manage operations, and stay competitive, enhancing efficiency and decision-making across functions. According to IBM, “By implementing effective solutions for AI in commerce, brands can create seamless, personalized buying experiences that increase customer loyalty, customer engagement, retention and share of wallet across B2B and B2C channels.” From automating customer service to optimizing trade compliance and market expansion, AI offers solutions that are more efficient, more precise, and more adaptable to ever-changing market demands. By harnessing AI, companies can streamline operations, improve customer experiences, and make informed decisions based on real-time data and advanced analytics. In this section, we explore AI use cases from across the Department of Commerce that highlight this transformative potential and its practical applications.

Use Cases

  • 1. Chatbot for International Trade Admin

    The Chatbot for the International Trade Administration (ITA) is integrated into trade.gov to help ITA clients get answers to frequently asked questions, find information, and receive recommendations for events and services. Clients interact with the chatbot by asking questions or responding to prompts. The chatbot searches ITA content libraries and staff inputs to provide answers and suggestions tailored to the client’s profile, whether they are an exporter, foreign buyer, or investor. This tool enhances the user experience by providing quick, personalized assistance.

  • 2. Consolidated Screening List

    The Consolidated Screening List (CSL) is a comprehensive list of entities for which the U.S. Government imposes restrictions on certain exports, reexports, or transfers. It merges 13 export screening lists from the Departments of Commerce, State, and Treasury. The CSL search engine features “Fuzzy Name Search” capabilities, enabling searches without needing the exact spelling of an entity’s name. In this mode, the CSL provides a “score” for results that closely match the searched name, which is especially useful for names translated from non-Latin alphabets. This tool aids in ensuring compliance with export regulations by simplifying the search process.
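
    As an illustration only, the sketch below scores fuzzy name matches with Python's standard-library difflib; the CSL's actual matching algorithm is not described here, and the entity names are fictional stand-ins.

    ```python
    from difflib import SequenceMatcher

    def fuzzy_score(query: str, name: str) -> float:
        # Similarity on a 0-100 scale; real fuzzy matchers may instead use
        # edit-distance or phonetic methods.
        return 100.0 * SequenceMatcher(None, query.lower(), name.lower()).ratio()

    # Hypothetical entries standing in for the consolidated list.
    entities = ["Acme Trading Company", "Akme Traiding Co", "Umbrella Exports Ltd"]

    query = "Acme Trayding Co"  # a spelling transliterated from a non-Latin script
    for score, name in sorted(((fuzzy_score(query, n), n) for n in entities), reverse=True):
        print(f"{score:5.1f}  {name}")
    ```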

  • 3. Market Diversification Toolkit

    The Market Diversification Toolkit helps users identify new potential export markets by analyzing current trade patterns. Users input the products they manufacture and the markets they currently export to. The tool uses a machine learning algorithm to suggest and compare new markets worth considering. It combines product-specific trade and tariff data with macroeconomic and governance data to provide a comprehensive view of suitable markets for further research. Users can filter the results to focus on specific markets and adjust the weight of eleven different indicators that contribute to a country’s overall score. Additionally, all data can be exported to a spreadsheet for detailed analysis.
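
    The toolkit's eleven indicators and scoring formula are not enumerated here; the sketch below shows only the general idea of a user-weighted composite score, using hypothetical, pre-normalized indicators for invented markets.

    ```python
    # Hypothetical indicators (already scaled 0-1) for three candidate markets; the
    # real toolkit combines eleven trade, tariff, macroeconomic, and governance inputs.
    indicators = {
        "Country A": {"product_demand": 0.8, "tariff_openness": 0.6, "governance": 0.7},
        "Country B": {"product_demand": 0.5, "tariff_openness": 0.9, "governance": 0.8},
        "Country C": {"product_demand": 0.9, "tariff_openness": 0.4, "governance": 0.5},
    }

    # User-adjustable weights; they sum to 1 so scores stay on a 0-1 scale.
    weights = {"product_demand": 0.5, "tariff_openness": 0.3, "governance": 0.2}

    def market_score(values: dict) -> float:
        return sum(weights[name] * values[name] for name in weights)

    for country, values in sorted(indicators.items(), key=lambda kv: -market_score(kv[1])):
        print(f"{country}: {market_score(values):.2f}")
    ```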

  • 4. Fisheries Electronic Monitoring Image Library

    The Fisheries Electronic Monitoring Library (FEML) serves as the central repository for electronic monitoring data concerning marine life. This library collects and stores data from various electronic monitoring systems, providing a comprehensive resource for researchers and policymakers. The FEML aims to support the conservation and management of marine resources by offering accessible and organized data on marine life.

  • 5. Developing automation to determine species and count using optical survey data in the Gulf of Mexico

    This project, built around the VIAME toolkit, uses optical survey data collected in the Gulf of Mexico to automate species identification and counting. The project has three main objectives: developing an image library of landed catch, creating automated image processing techniques using machine learning and deep learning to identify and count species from underwater images, and developing algorithms to process these images in near real-time and upload the information to a central database. This automation aims to enhance the accuracy and efficiency of marine species monitoring and data collection.
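
    As a toy illustration of the counting step, the sketch below aggregates per-frame detections into species counts; the detection format, species labels, and confidence threshold are all hypothetical stand-ins for the project's real pipeline outputs.

    ```python
    from collections import Counter

    # Hypothetical output of a detection model over survey video frames:
    # (frame_id, species_label, confidence) triples.
    detections = [
        (1, "red_snapper", 0.94), (1, "red_snapper", 0.88), (1, "grouper", 0.61),
        (2, "red_snapper", 0.91), (2, "amberjack", 0.34),
    ]

    CONF_THRESHOLD = 0.5  # discard low-confidence detections before counting

    counts = Counter(species for _, species, conf in detections if conf >= CONF_THRESHOLD)
    print(dict(counts))  # {'red_snapper': 3, 'grouper': 1}
    ```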

  • 6. Fast tracking the use of VIAME for automated identification of reef fish

    The project aims to expedite the use of VIAME for the automated identification of reef fish. Image libraries are being compiled to develop detection and classification models that automate the annotation process for the SEAMAP Reef Fish Video survey in the Gulf of Mexico. While the work is primarily conducted in VIAME, other potential methods are also being explored to identify the best-performing models. The current status indicates that the models are performing sufficiently well to incorporate automated analysis into video readings this spring, as part of a supervised annotation quality assurance/quality control process.

  • 7. A Hybrid Statistical-Dynamical System for the Seamless Prediction of Daily Extremes and Subseasonal to Seasonal Climate Variability

    The project aims to demonstrate the effectiveness and operational suitability of a hybrid statistical-dynamical prediction system that provides seamless probabilistic forecasts of daily extremes and subseasonal-to-seasonal temperature and precipitation. A Bayesian statistical method was recently demonstrated for post-processing seasonal forecasts of mean temperature and precipitation from the North American Multi-Model Ensemble (NMME). The current goal is to test an updated hybrid system that enables seamless sub-seasonal and seasonal forecasting, with a particular focus on representing daily extremes in line with climate conditions. The project also explores the use of machine learning to enhance forecasting accuracy and reliability.
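
    The project's specific Bayesian formulation is not given here; the sketch below only illustrates the general idea of statistical post-processing, calibrating raw ensemble-mean forecasts against observations with a conjugate Bayesian linear regression on synthetic data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic pairs of raw ensemble-mean forecasts (x) and verifying observations (y).
    x = rng.normal(size=200)
    y = 0.7 * x + 0.3 + rng.normal(scale=0.5, size=200)   # biased, noisy relationship

    X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + slope
    sigma2 = 0.25                               # assumed observation-noise variance
    prior_cov = 10.0 * np.eye(2)                # weak Gaussian prior on the weights

    # Conjugate posterior over the calibration weights (Bayesian linear regression).
    post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / sigma2)
    post_mean = post_cov @ (X.T @ y) / sigma2

    phi = np.array([1.0, 1.2])                  # features for a new raw forecast of 1.2
    mean = phi @ post_mean
    var = sigma2 + phi @ post_cov @ phi         # predictive variance
    print(f"calibrated forecast: {mean:.2f} +/- {np.sqrt(var):.2f}")
    ```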

  • 8. Coastal Change Analysis Program (C-CAP)

    The Coastal Change Analysis Program (C-CAP) began a high-resolution land cover development effort in 2015, using geographic object-based image analysis and machine learning algorithms like Random Forest to classify coastal land cover from 1-meter multispectral imagery. Recently, C-CAP has adopted Convolutional Neural Networks (CNN) to derive the impervious surface component of their land cover products. Most of this work is done through external contracts. Before this high-resolution effort, C-CAP focused on moderate resolution multi-date land cover development using Landsat imagery for the coastal U.S. Since 2002, C-CAP has used Classification and Regression Trees for land cover data development. This program aims to provide detailed and accurate land cover data to support coastal management and conservation efforts.
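
    As a minimal sketch of the Random Forest step, the example below trains scikit-learn's RandomForestClassifier on synthetic multispectral pixel values; the bands, labels, and labeling rule are illustrative stand-ins for C-CAP's actual training data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)

    # Synthetic training pixels: four spectral band values (e.g., R, G, B, NIR) each.
    bands = rng.uniform(0.0, 1.0, size=(500, 4))
    # A toy rule standing in for analyst-labeled classes:
    # high near-infrared reflectance -> vegetation (1), otherwise impervious (0).
    labels = (bands[:, 3] > 0.5).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(bands, labels)

    # Classify a new batch of pixels and report the mapped vegetation fraction.
    scene = rng.uniform(0.0, 1.0, size=(1000, 4))
    print(f"vegetation fraction: {clf.predict(scene).mean():.2f}")
    ```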

  • 9. Deep learning algorithms to automate right whale photo id

    This project uses deep learning algorithms to automate the identification of right whales from photographs. Initially started as a Kaggle competition, it has expanded to include multiple algorithms that match right whales from various viewpoints (aerial, lateral) and body parts (head, fluke, peduncle). The system is now operational on the Flukebook platform, serving both North Atlantic and southern right whales. A paper detailing this work is currently under review at Mammalian Biology. This automated identification system enhances the efficiency and accuracy of whale monitoring and conservation efforts.

  • 10. Coral Reef Watch

    NOAA Coral Reef Watch (CRW) has been using remote sensing, modeled, and in situ data for over 20 years to operate a Decision Support System (DSS) aimed at helping resource managers, researchers, decision makers, and other stakeholders worldwide prepare for and respond to coral reef ecosystem stressors, mainly due to climate change and ocean warming. CRW offers the world’s only global early-warning system for coral reef ecosystem physical environmental changes. It remotely monitors conditions that can lead to coral bleaching, disease, and death, providing near real-time information and early warnings to its user community. CRW also uses operational climate forecasts to predict stressful environmental conditions at specific reef locations globally. The products are primarily based on sea surface temperature (SST) but also include light and ocean color data, among other variables. This system supports the conservation and management of coral reefs by providing timely and accurate information.

  • 11. Robotic microscopes and machine learning algorithms remotely and autonomously track lower trophic levels for improved ecosystem monitoring and assessment

    This project focuses on using robotic microscopes and machine learning algorithms to remotely and autonomously track lower trophic levels, specifically phytoplankton, which are crucial to marine food webs supporting fisheries and coastal communities. Phytoplankton respond quickly to changes in physical and chemical oceanography, and shifts in their communities can affect the entire food web. The project employs an Imaging Flow Cytobot (IFCB) to continuously collect images of phytoplankton from seawater. Automated taxonomic identification of these images is done using a supervised machine learning approach, specifically the random forest algorithm. The IFCB is deployed on both fixed platforms (docks) and roving platforms (aboard survey ships) to monitor phytoplankton communities in aquaculture areas in Puget Sound and the California Current System. The project maps the distribution and abundance of phytoplankton functional groups and their relative food value, supporting fisheries and aquaculture. It also describes changes in these communities in relation to ocean and climate variability and change.
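
    A compact sketch of the supervised classification step appears below, using a random forest over synthetic morphology-style features to estimate community composition; the real system classifies IFCB image descriptors with analyst-verified labels.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Synthetic stand-ins for image features (e.g., size, aspect ratio, texture).
    X_train = rng.normal(size=(300, 3))
    y_train = rng.choice(["diatom", "dinoflagellate", "ciliate"], size=300)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # Classify a day's worth of imaged cells and estimate community composition.
    X_new = rng.normal(size=(1000, 3))
    taxa, counts = np.unique(clf.predict(X_new), return_counts=True)
    for taxon, frac in zip(taxa, counts / counts.sum()):
        print(f"{taxon}: {frac:.1%}")
    ```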

  • 12. Edge AI survey payload development

    This project involves the development and support of an Edge AI survey payload for multispectral aerial imaging, which runs detection model pipelines in real-time. The payload consists of nine cameras (color, infrared, ultraviolet) controlled by dedicated on-board computers with GPUs. YOLO detection models are used to process imagery faster than it is collected, enabling real-time analysis. The primary goals are to reduce the overall data burden by terabytes and shorten the data processing timeline, thereby expediting analysis and population assessment for Arctic mammals. This technology enhances the efficiency and effectiveness of aerial surveys and wildlife monitoring.
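
    The payload's software stack is not detailed beyond its use of YOLO models; the sketch below uses the open-source ultralytics package as a stand-in, with hypothetical weights and image files, to show how on-board filtering of empty frames could reduce the data burden.

    ```python
    from ultralytics import YOLO  # assumes the open-source ultralytics package

    # Hypothetical trained weights and image files standing in for the payload's cameras.
    model = YOLO("arctic_mammals.pt")

    for frame_path in ["frame_0001.jpg", "frame_0002.jpg"]:  # placeholder imagery
        results = model(frame_path)
        detections = results[0].boxes
        # Keep only frames with candidate animals, discarding empty imagery at the
        # edge to cut the volume of data that must be stored and reviewed later.
        if len(detections) > 0:
            print(frame_path, "->", len(detections), "candidate animals")
    ```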

  • 13. Ice seal detection and species classification in multispectral aerial imagery

    This project aims to refine and improve the detection and classification pipelines for ice seals in multispectral aerial imagery. The primary objectives are to reduce false positive rates to less than 50% while maintaining over 90% accuracy. Additionally, the project seeks to significantly reduce or eliminate the labor-intensive post-survey review process. By enhancing the accuracy and efficiency of these pipelines, the project aims to improve the monitoring and conservation of ice seal populations.
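
    One common way to pursue such targets is to sweep the detector's confidence threshold on a labeled validation set and measure the precision/recall trade-off; the sketch below illustrates this on synthetic scores and is not the project's actual evaluation code.

    ```python
    import numpy as np
    from sklearn.metrics import precision_score, recall_score

    rng = np.random.default_rng(7)

    # Synthetic validation set: true labels and detector confidence scores.
    y_true = rng.integers(0, 2, size=2000)
    scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=2000), 0.0, 1.0)

    for threshold in [0.3, 0.5, 0.7]:
        y_pred = (scores >= threshold).astype(int)
        prec = precision_score(y_true, y_pred, zero_division=0)
        rec = recall_score(y_true, y_pred, zero_division=0)
        # "False-positive share" here is the fraction of detections that are wrong.
        print(f"t={threshold:.1f}  false-positive share={1 - prec:.2f}  recall={rec:.2f}")
    ```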

  • 14. First Guess Excessive Rainfall Outlook

    The First Guess Excessive Rainfall Outlook is a machine learning product designed to provide an initial estimate for the Weather Prediction Center’s (WPC) Excessive Rainfall Outlook. It is developed using data from the Excessive Rainfall Outlook (ERO) along with various atmospheric variables. This product focuses on providing predictions for days 4-7, helping meteorologists and decision-makers prepare for potential excessive rainfall events. The use of machine learning enhances the accuracy and reliability of these forecasts.

  • 15. Automated detection of hazardous low clouds in support of safe and efficient transportation

    This project focuses on the maintenance and sustainment of the operational fog/low stratus (FLS) products, which support safe and efficient transportation by providing accurate and timely information to meteorologists and aviation professionals. These products are created by combining satellite imagery with Numerical Weather Prediction (NWP) data using machine learning techniques. The FLS products are available in the Advanced Weather Interactive Processing System (AWIPS) and are routinely used by the National Weather Service (NWS) Aviation Weather Center and Weather Forecast Offices.

  • 16. The Development of ProbSevere v3 - An improved nowcasting model in support of severe weather warning operations

    The Development of ProbSevere v3 focuses on enhancing a machine learning model that uses numerical weather prediction (NWP), satellite, radar, and lightning data to nowcast severe weather events such as wind, hail, and tornadoes. Initially transitioned to National Weather Service (NWS) operations in October 2020, ProbSevere has proven effective in improving severe weather warnings. The new version, ProbSevere v3, incorporates additional datasets and advanced machine learning techniques to further enhance its capabilities. After a successful demonstration in the 2021 Hazardous Weather Testbed, a proposal was submitted to facilitate its operational update. This development is funded by the GOES-R program, aiming to provide more accurate and timely severe weather forecasts.

  • 17. The VOLcanic Cloud Analysis Toolkit (VOLCAT): An application system for detecting, tracking, characterizing, and forecasting hazardous volcanic events

    The VOLcanic Cloud Analysis Toolkit (VOLCAT) is designed to detect, track, characterize, and forecast hazardous volcanic events, particularly volcanic ash, which poses significant risks to aviation. VOLCAT includes several AI-powered satellite applications for eruption detection, alerting, and volcanic cloud tracking. These tools are regularly used by Volcanic Ash Advisory Centers to issue advisories. The project aims to further develop VOLCAT products and transition them to the NESDIS Common Cloud Framework, ensuring compliance with new International Civil Aviation Organization (ICAO) requirements. This enhancement will improve the accuracy and reliability of volcanic ash monitoring and forecasting, thereby enhancing aviation safety.

  • 18. ProbSR (probability of subfreezing roads)

    ProbSR is a machine learning algorithm designed to predict the probability of roads being subfreezing, providing a 0-100% likelihood. This tool is essential for improving road safety by helping authorities and drivers anticipate and respond to hazardous driving conditions caused by subfreezing temperatures. By offering precise and timely predictions, ProbSR aids in better decision-making for road maintenance and travel safety.
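
    ProbSR's actual predictors and model are not specified here; the sketch below shows one plausible shape of such a product, a logistic regression over hypothetical weather features that outputs a 0-100% probability on synthetic data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)

    # Hypothetical predictors: air temperature (degC), dewpoint depression, wind speed.
    X = rng.normal(loc=[1.0, 2.0, 4.0], scale=[4.0, 1.5, 2.0], size=(1000, 3))
    # Synthetic truth: roads tend to be subfreezing when air temperature is low.
    y = (X[:, 0] + rng.normal(scale=1.5, size=1000) < 0).astype(int)

    model = LogisticRegression().fit(X, y)

    obs = np.array([[-2.0, 1.0, 5.0]])  # a new set of observed conditions
    prob = 100.0 * model.predict_proba(obs)[0, 1]
    print(f"probability of subfreezing road: {prob:.0f}%")
    ```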

  • 19. VIAME: Video and Image Analysis for the Marine Environment Software Toolkit

    VIAME, or the Video and Image Analysis for the Marine Environment Software Toolkit, is an open-source, modular toolkit that enables users to apply advanced deep-learning algorithms for automated image annotation through a user-friendly graphical interface. Available free of charge to all NOAA users, VIAME supports a wide range of applications in marine research and monitoring. The National Oceanic and Atmospheric Administration (NOAA) Fisheries Office of Science and Technology funds an annual maintenance contract that includes technical and customer support, routine software updates, bug fixes, and development efforts to meet diverse application needs across different centers. This toolkit enhances the efficiency and accuracy of marine imagery analysis, supporting various research and conservation initiatives.

  • 20. ENSO Outlooks using observed/analyzed fields

    This project employs a Long Short-Term Memory (LSTM) model to use ocean and atmospheric predictors across the tropical Pacific for forecasting the Oceanic Niño Index (ONI) values up to one year in advance. An extension of this project has been proposed to the cloud portfolio, aiming to incorporate a Convolutional Neural Network (CNN) layer that utilizes reforecast data to enhance the accuracy of ONI predictions. This advanced modeling approach helps improve the understanding and forecasting of El Niño-Southern Oscillation (ENSO) events, which are critical for climate prediction and planning.
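
    A minimal PyTorch sketch of the sequence-to-one LSTM setup appears below, mapping twelve monthly steps of synthetic tropical-Pacific predictors to a single ONI value; the feature count, layer sizes, and data are all illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    class ONIForecaster(nn.Module):
        """Sequence-to-one LSTM: a year of monthly predictors -> one ONI value."""
        def __init__(self, n_features: int = 8, hidden: int = 32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                    # x: (batch, months, features)
            _, (h, _) = self.lstm(x)
            return self.head(h[-1]).squeeze(-1)  # last hidden state -> scalar ONI

    # Synthetic training batch: 16 samples, 12 monthly steps, 8 predictors per step.
    x = torch.randn(16, 12, 8)
    y = torch.randn(16)                          # synthetic ONI targets a year ahead

    model = ONIForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    print(f"loss after one training step: {loss.item():.3f}")
    ```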

  • 21. Using community-sourced underwater photography and image recognition software to study green sea turtle distribution and ecology in southern California

    This project aims to study the distribution and ecology of green sea turtles in and around La Jolla Cove in the San Diego region, a popular ecotourism site. By engaging local photographers to collect underwater images of green turtles, the project uses publicly available facial recognition software (HotSpotter) to identify individual turtles. This identification helps determine population size, residency patterns, and foraging ecology. The involvement of the community in data collection enhances the scope and accuracy of the research, contributing valuable insights into the conservation of green sea turtles.

  • 22. An Interactive Machine Learning Toolkit for Classifying Species Identity of Cetacean Echolocation Signals in Passive Acoustic Recordings

    This project focuses on developing robust automated machine learning tools for detecting and classifying the echolocation clicks of toothed whales and dolphins, covering up to 20 species found in the Gulf of Mexico. Funded from June 2018 to May 2021, the tools will be used for automated analysis of long-term recordings from passive acoustic moored instruments deployed across the Gulf from 2010 to 2025. The goal is to understand the environmental processes driving trends in marine mammal density and distribution. These advanced tools will enhance the efficiency and accuracy of marine mammal monitoring and conservation efforts.

  • 23. Steller sea lion automated count program

    The Steller Sea Lion Automated Count Program, led by NOAA Fisheries Alaska Fisheries Science Center’s Marine Mammal Laboratory (MML), is tasked with monitoring the endangered western Steller sea lion population in Alaska. MML conducts annual aerial surveys of known sea lion sites along the southern Alaska coastline, capturing visual imagery. Traditionally, two full-time counters manually process overlapping images to avoid double counting and classify individuals by age and sex. These counts are crucial for population and ecosystem-based modeling, informing sustainable fishery management decisions, and are highly anticipated by stakeholders. MML has collaborated with Kitware to develop detection and image registration pipelines using VIAME, updating the DIVE program to meet new interface needs. Currently, MML is assessing the efficacy of these algorithms and developing a workflow to enhance the traditional counting method, aiming to improve accuracy and efficiency in monitoring efforts.

  • 24. Steller sea lion brand sighting

    The Steller Sea Lion Brand Sighting project focuses on detecting and identifying branded Steller sea lions using remote camera images in the western Aleutian Islands, Alaska. The primary goal is to streamline the photo processing workflow, reducing the effort required to review images. By automating the detection and identification process, the project aims to enhance the efficiency and accuracy of monitoring branded sea lions, contributing to better management and conservation of the species.

  • 25. Picky

    The Picky project utilizes Convolutional Neural Networks (CNN) to identify objects of specific sizes from side scan imagery. This technology provides users with a probability score, enabling the automation of contact picking in the field. Side scan imagery, which is a single-channel intensity image, is well-suited for basic CNN techniques. This automation enhances the efficiency and accuracy of object detection in various field applications.
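
    As an illustration, the sketch below defines a tiny CNN that maps a single-channel side-scan chip to a contact probability; the architecture, chip size, and data are assumptions, not Picky's actual network.

    ```python
    import torch
    import torch.nn as nn

    class ContactNet(nn.Module):
        """Tiny binary classifier for single-channel side-scan image chips."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

        def forward(self, x):                   # x: (batch, 1, 64, 64) intensity chips
            return torch.sigmoid(self.head(self.features(x)))

    chips = torch.rand(4, 1, 64, 64)            # synthetic 64x64 side-scan chips
    probs = ContactNet()(chips).squeeze(-1)     # one contact probability per chip
    print([f"{p:.2f}" for p in probs.tolist()])
    ```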

  • 26. WAWENETS

    WAWENETS is an algorithm designed to estimate the quality and intelligibility of speech in telecommunications. It processes digital recordings of speech from telecommunications systems and outputs a single number indicating speech quality (on a 1 to 5 scale) or speech intelligibility (on a 0 to 1 scale). This tool helps in assessing and improving the performance of telecommunications systems by providing clear metrics for speech quality and intelligibility.

  • 27. AI retrieval for patent search

    The AI retrieval for patent search project aims to enhance the next-generation patent search tool by assisting examiners in identifying relevant documents and additional search areas. The system processes inputs from both published and unpublished applications and provides recommendations on further prior art areas to explore. Users can sort these recommendations by similarity to concepts of their choosing, making the patent search process more efficient and comprehensive.
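
    The production system's retrieval models are not described here; the sketch below shows only the underlying idea of similarity-ranked retrieval, using TF-IDF vectors and cosine similarity over a toy corpus.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy stand-ins for prior-art documents.
    corpus = [
        "method for wireless charging of electric vehicles",
        "battery thermal management system for electric cars",
        "crop irrigation scheduling using soil moisture sensors",
    ]
    query = "charging system for an electric vehicle battery"

    vectorizer = TfidfVectorizer().fit(corpus + [query])
    sims = cosine_similarity(vectorizer.transform([query]),
                             vectorizer.transform(corpus))[0]

    # Sort documents by similarity to the examiner's chosen concept.
    for score, doc in sorted(zip(sims, corpus), reverse=True):
        print(f"{score:.2f}  {doc}")
    ```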

  • 28. AI use for CPC classification

    This project involves an AI system that classifies incoming patent applications according to the Cooperative Patent Classification (CPC) scheme. It assists in the operational assignment of work and recommends classification symbols for AI searches. The back-office processing system takes incoming patent applications as input and outputs the corresponding classification symbols, streamlining the patent classification process and improving accuracy.
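
    As a toy illustration of symbol recommendation framed as text classification, the sketch below fits a TF-IDF plus logistic-regression pipeline on invented abstracts with hypothetical one-letter CPC section labels; the operational system is far more sophisticated.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented application abstracts with hypothetical CPC section labels.
    texts = [
        "semiconductor device with stacked memory cells",    # H (electricity)
        "photovoltaic panel mounting bracket",               # H
        "pharmaceutical composition for treating asthma",    # A (human necessities)
        "surgical instrument with articulating jaw",         # A
    ]
    labels = ["H", "H", "A", "A"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    new_app = ["integrated circuit package with heat sink"]
    pred = clf.predict(new_app)[0]
    conf = clf.predict_proba(new_app).max()
    print(f"recommended section: {pred}  (p={conf:.2f})")
    ```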

  • 29. AI retrieval for TM design coding and Image search

    The AI retrieval for Trademark (TM) design coding and image search project uses a Clarivate Commercial Off-The-Shelf (COTS) solution to help examiners identify similar trademark images. It suggests the correct assignment of mark image design codes and assesses the potential acceptability of goods and services identifications. The system processes both incoming and registered trademark images, outputting design codes and related images, thereby enhancing the efficiency and accuracy of trademark examination.

  • 30. Enriched Citation

    The Enriched Citation project is a data dissemination system that identifies references or prior art cited in specific patent application office actions. It includes bibliographic information of the references, the claims they were cited against, and the relevant sections relied upon by the examiner. The system extracts this information from unstructured office actions and provides it through a structured public-facing API, making it easier to access and understand citation data in patent applications.

  • 31. Inventor Search Assistant (iSAT)

    The Inventor Search Assistant (iSAT) is a service designed to help inventors begin the process of identifying relevant documents, figures, and classification codes for conducting a novelty search. Users enter a short description of their invention, and the system provides a selectable set of recommended documents, figures, and classification areas. This tool simplifies the initial stages of patent research, making it more accessible for inventors.

Conclusion

AI's integration into the commerce industry offers a multitude of benefits that drive operational efficiency, enhance customer satisfaction, and support strategic decision-making. By leveraging AI-powered chatbots, businesses can provide rapid and personalized customer support, while advanced tools like the Consolidated Screening List and Market Diversification Toolkit simplify compliance and market analysis. The development of innovative AI solutions, such as automated species identification and real-time environmental monitoring, exemplifies how AI can be applied to diverse fields within commerce, from trade administration to environmental sustainability. These AI applications not only improve accuracy and efficiency but also enable businesses to stay ahead in a competitive landscape. By adopting AI technologies, organizations can better manage resources, predict market trends, and ensure compliance with regulations, ultimately leading to enhanced performance and growth. The transformative impact of AI in commerce underscores its role as a critical enabler of modern business practices and a catalyst for future advancements in the industry.
