Federal AI Use Case Inventory
This page presents the Federal AI Use Case Inventory as a single table.
| Agency | Bureau/Component | Use Case ID | Use Case Name | Stage of Development (Raw) | Use Case Topic Area | Stage of Development | Is the AI use case high-impact? (Raw) | High-impact? | Justification | AI Classification | What problem is the AI intended to solve? | What are the expected benefits and positive outcomes from the AI for an agency's mission and/or the general public? | Describe the AI system's outputs. | Date when AI use case became operational or the pilot's start date | Was the system involved in this use case purchased from a vendor or developed under contract(s) or in-house? | Vendor(s) Name | Does this AI use case have an associated Authorization to Operate (ATO)? | System(s) Name | Describe any data used to train, fine-tune, and/or evaluate performance of the model(s) used in this use case. | If the data is required to be publicly disclosed as an open government data asset, provide a link to the entry on the Federal Data Catalog. | Does this AI use case involve personally identifiable information (PII) that is maintained by the agency? | If publicly available, provide the link to the AI use case's associated Privacy Impact Assessment (PIA). | Which, if any, demographic variables does the AI use case explicitly use as model features? | Does this project include custom-developed code? | If the code is open source, provide the link for the publicly available source code. | Has pre-deployment testing been conducted for this AI use case? | Has an AI impact assessment been completed for this AI use case? | What are the potential impacts of using the AI for this particular use case and how were they identified? | Has an independent review of the AI use case been conducted? | Is there a process to conduct ongoing monitoring to identify any adverse impacts to the performance and security of the AI functionality, as well as to privacy, civil rights, and civil liberties? | Has the agency established sufficient and periodic training for operators of the AI to interpret and act on its output and manage associated risks? | Does this AI use case have an appropriate fail-safe that minimizes the risk of significant harm? | Is there an established appeal process in the event that an impacted individual would like to appeal or contest the AI system's outcome? | What steps has the agency taken to consult and incorporate feedback from end users of this AI use case and the public? |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Commodity Futures Trading Commission | DOD | CFTC-001 | Anomaly Detection for Data Quality | Operation and Maintenance | Deployed | Anomaly detection model designed to identify potentially erroneous data loads in TCR data. Uses an isolation forest model with aggregated data and automatically runs daily. | ||||||||||||||||||||||||||||
| Commodity Futures Trading Commission | DCR | CFTC-003 | Stress Testing Scenarios with Deep Learning | Acquisition and/or Development | Pre-deployment | Pilot project to explore neural-network based machine learning methods for creating stress testing scenarios and estimating PnL (profit and loss) on FO (Futures and Options) portfolios based on the current/recent market states/conditions. | ||||||||||||||||||||||||||||
| Commodity Futures Trading Commission | MPD | CFTC-004 | MPD Entity Risk Modeling | Initiated | Pre-deployment | Entity-level risk modeling project. Uses statistical and probabilistic models to predict firms experiencing changes in capital levels. Very early stage R&D effort. | ||||||||||||||||||||||||||||
| Department Of Agriculture | Administrative and Financial Management | USDA-001 | Repair Spend | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The intended purpose of this model is to review financial documents and then classify each expense as money spent on "facility repairs" or "not facility repairs". The expected benefits include reduction of manual hours identifying the types of transactions. | The output of the model is a recommendation of which financial transactions should be identified as "repair" expenses. | 10/01/2019 | Developed with both contracting and in-house resources | The output of the model is a recommendation of which financial transactions should be identified as "repair" expenses. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of National Programs | USDA-002 | ARS Project Mapping | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The intended purpose of this model is to process research plans from various research program portfolios in the Agricultural Research Service (ARS) to find patterns and opportunities between projects. The expected benefits include decreasing the time that humans would spend to manually read, pull out key terms, and group the projects by topic. The model may also find patterns that a human might miss. | The model outputs groups of similar projects and project terms. The output includes metrics (silhouette scores, term rank, importance scores) that show how well the projects and terms in a group match. | 01/01/2020 | Developed with contracting resources | The model outputs groups of similar projects and project terms. The output includes metrics (silhouette scores, term rank, importance scores) that show how well the projects and terms in a group match. | None; | ||||||||||||||||||||
| Department Of Agriculture | National Agricultural Library | USDA-003 | NAL Automated Indexing | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | This system automatically assigns word tags to agricultural research articles from a controlled list of terms provided by the National Agricultural Library Thesaurus (NALT). The tags can be used to look up and retrieve articles. Using these tags benefits users by making it easier to find the content they are looking for. | The model outputs terms to use as search tags that are specific to the article that the model analyzed. | 06/01/2011 | Developed with both contracting and in-house resources | The model outputs terms to use as search tags that are specific to the article that the model analyzed. | None; | ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-004 | Predictive Modeling of Invasive Pest Species | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of the model is to check how likely it is for imported agricultural products from other countries to have pests. Benefits include more reliable discovery and quarantine of invasive pests, preventing pest invasion and making trade safer. | The model outputs are a prediction of whether a product carries an invasive species and what invasive species category the pest belongs to. | 07/01/2015 | Developed in-house | The model outputs are a prediction of whether a product carries an invasive species and what invasive species category the pest belongs to. | None; | ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-005 | Detection of Pre-symptomatic HLB Infected Citrus | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | The purpose of the model is to detect citrus trees infected with Huanglongbing (HLB) disease using images collected by a camera sensor on a small drone. This system would decrease time and cost associated with manual searching for HLB infected trees. | The model outputs GPS Coordinates of potential Huanglongbing (HLB) infected areas. | The model outputs GPS Coordinates of potential Huanglongbing (HLB) infected areas. | |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-006 | High Throughput Phenotyping in Citrus Orchards | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | The main purpose of this system is to analyze drone images to locate, count, and categorize citrus trees in an orchard to monitor orchard health. This use case saves thousands of man-hours searching for signs of plant damage and disease in orchards. | The model output flags images containing plant damage or disease. | The model output flags images containing plant damage or disease. | |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-007 | Detection of Aquatic Weeds | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | The purpose of this system is to locate and identify aquatic weed species using images from drones. Expected benefits include decreasing time that would have been spent manually reviewing the images. | The model outputs the aquatic weed species contained in the image. | The model outputs the aquatic weed species contained in the image. | |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-008 | Automated Detection & Mapping of Host Plants from Ground Level Imagery | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | This system generates maps of specific tree species from ground-level (streetview) images. Expected benefits are decreased time and cost associated with manual collection of the data. | The model outputs GPS coordinates of flagged locations. | The model outputs GPS coordinates of flagged locations. | |||||||||||||||||||||||||
| Department Of Agriculture | Strategic Planning and Business Services Division | USDA-009 | Democratizing Data | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | This system scans collections of published documents to find how publicly-funded data and evidence are used to serve science and society. This helps the National Agricultural Statistics Service and the Economic Research Service understand who is using their data and why. This improves customer service, helps evaluate programs, and answers important questions for planning and learning. | The model outputs text containing the identified dataset reference information. | 03/08/2021 | Developed with contracting resources | The model outputs text containing the identified dataset reference information. | None; | ||||||||||||||||||||
| Department Of Agriculture | Geospatial Enterprise Operations | USDA-011 | Land Change Analysis Tool (LCAT) | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Mission-Enabling (Internal Agency Support) | Pilot | No | Not high-impact | The Land Change Analysis Tool (LCAT) creates high resolution maps to help make land use decisions. For example, it has been used to monitor eastern redcedar for about 40 years in South Dakota and to support wildlife hazard assessments at airports with various organizations. This tool reduced the labor hours needed by the Farm Service Agency (FSA) to review land data accuracy in Georgia by a factor of 100. | The model outputs land cover maps. | 10/01/2018 | Developed in-house | The model outputs land cover maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of the Chief Data Officer | USDA-012 | OCIO/CDO Council Comment Analysis Tool | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | This prototype helps reviewers identify the main topics and themes of comments, and then group similar comments together. This makes the comment review process more efficient by providing new insights and speeding up comment processing. Benefits include reducing repeated development efforts across the government and saving costs. | The model outputs groups of comments categorized by topic and similarity. | 12/01/2020 | Developed with contracting resources | The model outputs groups of comments categorized by topic and similarity. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of Retailer Operations & Compliance | USDA-013 | Retailer Receipt Analysis | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | No | Not high-impact | This system uses optical character recognition (OCR) to convert physical inventory documentation into digital text. This makes the review of inventory documents more efficient and consistent. | The model outputs digital text of inventory documentation and distinguishes food items and categories. | 10/01/2021 | Developed with contracting resources | The model outputs digital text of inventory documentation and distinguishes food items and categories. | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development | USDA-014 | Ecosystem Management Decision Support System (EMDS) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | This system provides decision support for environmental analysis and planning by using AI-powered tools in ArcGIS and QGIS. Use of this system empowers stakeholders to make more informed and effective decisions about natural resource management. | Outputs from this system include the identification of landscapes in need of management/maintenance, along with suggested management actions based on considerations such as cost, efficacy, and policy. | 01/01/1994 | Developed with contracting resources | Outputs from this system include the identification of landscapes in need of management/maintenance, along with suggested management actions based on considerations such as cost, efficacy, and policy. | None; | ||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-016 | Cross-Laminated Timber (CLT) Knowledge Database | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | This system enables researchers, practitioners, and the public to find specialized information about timber products. Benefits include faster information sharing and less time spent on manual searches. | System outputs are webpage links from the timber knowledge database. | 12/01/2017 | Developed with contracting resources | System outputs are webpage links from the timber knowledge database. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-017 | Raster Tools | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | This system will make machine learning techniques available for geospatial applications. Benefits include standardization of methods, improved work quality, and increased user productivity. | The system API (Application Programming Interface) provides various AI outputs, usually in the form of raster images and data tables. | 08/01/2021 | Developed in-house | The system API (Application Programming Interface) provides various AI outputs, usually in the form of raster images and data tables. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station, Missoula Fire Sciences Lab | USDA-018 | TreeMap and FuelMap (all versions) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | TreeMap provides a detailed model of the forests in the US. It is used for measuring carbon, planning fuel treatments, starting landscape vegetation models, assessing fire effects, and more. Users include the US Forest Service, private companies, and state governments. | TreeMap produces a detailed map of a plot of forest and a database table listing individual tree records or fuel characteristics for each plot. | 01/01/2010 | Developed in-house | TreeMap produces a detailed map of a plot of forest and a database table listing individual tree records or fuel characteristics for each plot. | None; | ||||||||||||||||||||
| Department Of Agriculture | Geospatial Office | USDA-019 | Landscape Change Monitoring System (LCMS) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | This project monitors large areas for changes in land cover and land use over time. The benefits include creating a consistent method for tracking changes in the landscape. | The model outputs predictions of vegetation gain, vegetation loss, land cover, and land uses. | Developed in-house | The model outputs predictions of vegetation gain, vegetation loss, land cover, and land uses. | None; | |||||||||||||||||||||
| Department Of Agriculture | Geospatial Office | USDA-021 | Forest Health Detection Monitoring | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Energy & the Environment | Pilot | No | Not high-impact | This project monitors forest health by detecting tree damage through changes in light patterns collected by satellites. This detection method helps the Forest Health Protection program monitor areas that can't be checked on the ground or with aerial surveys. | The model outputs the stage of forest health based on the image, along with a map (polygons) of the area for monitoring. | Developed with both contracting and in-house resources | The model outputs the stage of forest health based on the image, along with a map (polygons) of the area for monitoring. | None; | |||||||||||||||||||||
| Department Of Agriculture | Research and Development Division | USDA-022 | Cropland Data Layer | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | This project produces supplemental estimates of crop acreage and releases geospatial data products to the user community. | The system outputs are an acreage estimate and agriculture-specific land cover product. | 01/01/2008 | Developed in-house | The system outputs are an acreage estimate and agriculture-specific land cover product. | None; | ||||||||||||||||||||
| Department Of Agriculture | Frames Maintenance | USDA-023 | List Frame Deadwood Identification | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | This model helps identify farms that may be out of business on the National Agricultural Statistics Service list. Parts of the model were used to create clear rules to identify these farms. The resulting list is more accurate and allows for smaller sample sizes, reducing the burden on respondents. | The output of the model was a probability score that a farm is out of business. | 02/04/2014 | Developed in-house | The output of the model was a probability score that a farm is out of business. | Age; | ||||||||||||||||||||
| Department Of Agriculture | Planning, Accountability and Reporting Staff and Institute of Bioenergy, Climate and Environment | USDA-024 | Climate Change Classification NLP | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The Climate Change Classification Natural Language Processing (NLP) model identifies likely climate-related projects within National Institute of Food and Agriculture's (NIFA) large and diverse funding portfolio. Expected benefits include reduced labor hours for reporting and increased repeatability and accuracy of reporting. | Model output is a list of climate change projects classified as "climate change related" or "not climate change related" for National Institute of Food and Agriculture (NIFA) internal project review/adjudication and reporting. | 07/01/2021 | Developed with both contracting and in-house resources | Model output is a list of climate change projects classified as "climate change related" or "not climate change related" for National Institute of Food and Agriculture (NIFA) internal project review/adjudication and reporting. | None; | ||||||||||||||||||||
| Department Of Agriculture | Facility Protection Division (FPD) | USDA-025 | Video Surveillance System | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this system is to conduct facial recognition video surveillance to provide enhanced security. Benefits include reduced labor hours for technicians and augmented surveillance capability. | The system outputs a positive match to the security control center, indicating identification of the selected individual. An alarm notification is sent to alert security personnel. | Developed with both contracting and in-house resources | The system outputs a positive match to the security control center, indicating identification of the selected individual. An alarm notification is sent to alert security personnel. | Sex/Gender; Race/Ethnicity; | |||||||||||||||||||||
| Department Of Agriculture | Office of the Chief Information Officer | USDA-026 | Acquisition Approval Request Compliance Tool | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | This project was developed to help identify likely Information Technology (IT) purchases that do not have an associated Acquisition Approval Request. The benefits are reducing unauthorized IT purchases and increasing compliance with IT procurement procedures and approvals. | The output is a score indicating how likely it is that the purchase is an Information Technology (IT) purchase. | Developed with both contracting and in-house resources | The output is a score indicating how likely it is that the purchase is an Information Technology (IT) purchase. | None; | |||||||||||||||||||||
| Department Of Agriculture | National Water and Climate Center | USDA-027 | Operational Water Supply Forecasting for Western US Rivers | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | Yes | High-impact | The National Water and Climate Center has a multi-model machine-learning metasystem (M4) for generating water supply forecasts. This model uses AI and other data-science technologies to reduce forecast errors, helping stakeholders make better decisions about water supply availability. | The model outputs water supply forecasts. | 12/01/2019 | Developed in-house | The model outputs water supply forecasts. | None; | ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-028 | Standardization of Cut Flower Business Names for Message Set Data | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Mission-Enabling (Internal Agency Support) | Retired | Natural Language Processing (NLP) is used to match names of producers to varietal information for cut flowers, which will help convert from manual to automated inspection systems. The main benefit of automation is that it can manage thousands of entities, which would be impossible to handle manually. | The model outputs before and after lists of producer names and cut flower varieties. | The model outputs before and after lists of producer names and cut flower varieties. | |||||||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-029 | Intelligent Ticket Routing | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | Help desk tickets are often sent to the wrong group and must be manually re-routed to the correct group, which takes time and resources and may delay issue resolution. The Intelligent Ticket Routing system helps send each ticket to the correct group, increasing customer satisfaction by reducing the number of times a customer is transferred or placed on hold, and decreasing the average handle time (AHT). In this specific use case, it reduces the time taken to route a ticket to the appropriate group, shortening the time required to resolve an issue. | The system outputs a prediction of the appropriate group for ticket management. | 01/01/2022 | Developed with both contracting and in-house resources | The system outputs a prediction of the appropriate group for ticket management. | None; | ||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-030 | Predictive Maintenance Impacts | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | A natural language processing (NLP) model classifies whether infrastructure maintenance changes will or will not cause an incident at the Digital Infrastructure Services Center (DISC). Using this system, the business can improve the review process or address specific needs within groups. This will lead to process improvements, increased productivity, higher performance and job satisfaction, higher client satisfaction, and better achievement of key performance indicators (KPIs). | The model outputs a score between 0 and 1; scores closer to 1 indicate a higher likelihood that the proposed change will cause an incident. | 03/01/2020 | Developed with both contracting and in-house resources | The model outputs a score between 0 and 1; scores closer to 1 indicate a higher likelihood that the proposed change will cause an incident. | None; | ||||||||||||||||||||
| Department Of Agriculture | Center for Civil Rights Operations; Data Records and Management Division | USDA-031 | Artificial Intelligence SPAM Mitigation Project | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Government Services (includes Benefits and Service Delivery) | Retired | An AI/ML model automatically classifies and removes spam and marketing emails from civil rights complaints email channels. Benefits include reducing the time spent manually managing email channels, decreasing the memory burden on email systems, and lowering the risk from malicious emails. | The model outputs a classification of received emails, flagging spam, marketing, and phishing emails. | The model outputs a classification of received emails, flagging spam, marketing, and phishing emails. | |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-032 | Approximate String Matching (aka fuzzy matching) to Standardize Data | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | A model is used to replace typos in Plant Protection and Quarantine (PPQ) program data using a list of standardized producer and commodity names. This results in clean, standardized data through an automated workflow. Benefits include reduced labor hours compared to manual data cleaning, near-real-time reporting, and accurate data that enables program managers to conduct efficient policy enforcement and program monitoring. | The model outputs corrected text data. | 02/01/2023 | Developed in-house | The model outputs corrected text data. | None; | ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-033 | Automated PDF Document Processing and Information Extraction | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | This use case takes program and workforce related information stored in thousands of PDFs and converts the information into data tables that can be used for analytics and dashboards. This makes information that is difficult to find available in real-time to support decision making and saves large amounts of time compared to previous methods used. | The model outputs structured database tables. | Developed in-house | The model outputs structured database tables. | None; | |||||||||||||||||||||
| Department Of Agriculture | Research and Development Division | USDA-035 | Census Propensity Scores via ML | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | This model predicts how likely individuals or operations are to complete the Census of Agriculture. The predictions can help data collectors decide where they need to focus their efforts in order to get more complete census responses. | The model outputs a probability score (values from 0 to 1, inclusive). | 10/01/2022 | Developed in-house | The model outputs a probability score (values from 0 to 1, inclusive). | Zipcode; | ||||||||||||||||||||
| Department Of Agriculture | Soil Science and Resource Assessment | USDA-036 | Ecological Site Descriptions (Machine Learning) | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Mission-Enabling (Internal Agency Support) | Retired | This AI/ML work conducts analysis of over 20 million records of soils data and 20,000 text documents of ecological information in order to provide complete soil based ecological information for the country. Benefits include reduction in labor hours manually analyzing documents, and enabling stakeholders to examine records in ways previously not thought of to make more informed decisions. | The AI model outputs ecological soil classifications and mappings. | The AI model outputs ecological soil classifications and mappings. | |||||||||||||||||||||||||
| Department Of Agriculture | Resource Inventory and Assessment Division | USDA-037 | Conservation Effects Assessment Project | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of the use case is to predict the conservation effects of cropland practices in real time, with no technical skill required. Such models would allow field conservation planners to have real-time conservation effects on sediment and nutrients. | The model outputs predictions of sediment and nutrient change values based on conservation methods. | 11/01/2021 | Developed in-house | The model outputs predictions of sediment and nutrient change values based on conservation methods. | None; | ||||||||||||||||||||
| Department Of Agriculture | Resource Inventory and Assessment Division | USDA-038 | Digital Imagery (no-change) for NRI Program | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | AI algorithms are used to examine landscape images and detect whether the landscape has changed from year to year. Currently, about 72,000 aerial images are interpreted by dozens of technicians each year to collect data for the National Resources Inventory (NRI) program. This use case would decrease the number of labor hours technicians must spend manually interpreting images. | The model outputs a classification of “no-change” for images in which the landscape remains stable from year to year. | 10/01/2022 | | ||||||||||||||||||||||||
| Department Of Agriculture | Regional Operations & Support; Mountain Plains Regional Office | USDA-039 | Nutrition Education & Local Access Dashboard | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Government Services (includes Benefits and Service Delivery) | Deployed | No | Not high-impact | The goal of this dashboard is to provide county-level information on nutrition education and local food access, alongside other metrics related to hunger and nutritional health. This interactive dashboard can provide specific details based on the properties of farm-to-school intensity and size, program activity intensity, ethnicity and race, fresh food access, school size, and program participation. These properties allow users to find similar states based on any of these characteristics, opening up opportunities for partnerships with states they may not have considered. Benefits include increasing stakeholder awareness and empowering more informed decision-making and collaboration. | The model outputs groups of similar counties/states based on the different combinations of properties available for states. | 11/09/2022 | Developed with both contracting and in-house resources | | Race/Ethnicity; | ||||||||||||||||||||
| Department Of Agriculture | Methods Division | USDA-040 | Survey Text Remarks Value Scoring | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this use case is to analyze a large amount of text in survey responses and score all comments with a priority value. The highly scored blocks of text are then prioritized for human review and are responded to more quickly than if they had been held in a queue. | The model outputs a value score for each snippet of text; highly scored snippets are placed at the front of the queue before lower-scored blocks so that valuable text is captured more quickly. | Developed in-house | | None; | |||||||||||||||||||||
| Department Of Agriculture | Methodology Division; Statistics Division; Regional Field Offices | USDA-041 | Survey Outlier Detection Model | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | The purpose of this use case is to identify abnormal values to edit in surveys. This reduces manual labor and improves data quality. | The model outputs a recommendation of which values in a dataset should be changed. | 05/01/2022 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of the Chief Technology Officer | USDA-042 | Multilingual Translation of Recalls and Public Health Alerts | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | No | Not high-impact | The purpose of this system is to expand the multilingual outreach of food safety information like recalls and public health alerts. Benefits include cost savings on vendor translation services, faster messaging circulation, and an increased number of languages available to the general public. | The model outputs multilingual translations created from the original English text. | Developed with both contracting and in-house resources | | None; | |||||||||||||||||||||
| Department Of Agriculture | Office of Public Health Science | USDA-043 | Genomic Analyses of Pathogen Subtypes | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this use case is to use machine learning (ML) methods to group foodborne germs based on patterns in their genes, then connect this information with available health data to evaluate foodborne germ risk to public health. Expected benefits include improving our understanding of important foodborne germ genes, assessing key genes and new trends, and identifying and ranking germs that are important for public health. | The model outputs predictions of high risk foodborne germ subtypes, key genetic markers by importance, and emerging trends. | 08/01/2022 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of Public Health Science | USDA-044 | Foodborne Illness Source Attribution | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The Interagency Food Safety Analytics Collaboration (IFSAC) - a partnership between the Centers for Disease Control and Prevention (CDC), the U.S. Food and Drug Administration (FDA), and the Food Safety and Inspection Service (FSIS) - has used computer-based methods to predict the likely sources of foodborne illnesses in humans caused by various germs (e.g., Salmonella, Campylobacter). Expected benefits include improving our understanding of where these germs come from and how they spread, which can help in creating measures and policies to prevent or reduce illnesses and the overall impact of these diseases. | The model outputs predictions of likely sources of foodborne human illness cases, along with a confidence score of how probable it is that the illness came from the predicted source. | 08/02/2021 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | MRPIT Data & Analytics Directorate | USDA-045 | Public Comments Analysis | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of the model is to automate the analysis of comments from regulations.gov to help personnel in their review and response tasks. Benefits include a reduction in the number of labor hours needed for review and response. | The model outputs text analysis and categorization of the public comments. | 11/01/2023 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Rangeland Management Research Unit (Las Cruces) | USDA-046 | Rangeland Analysis Platform | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The Rangeland Analysis Platform (RAP) allows users to track changes in plant growth and coverage over time. By monitoring the condition of agricultural ecosystems and the impact of conservation efforts, it can guide conservation practices for wildlife habitats, carbon assessments, and tax assessments. | The system outputs estimated fractional plant cover and net primary productivity estimates. | 04/01/2022 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development Division; Methodology Division; Regional Field Offices | USDA-047 | Predictive Cropland Data Layer | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | The purpose of this system is to predict crop rotations. Benefits include improving data quality of area-based surveys. | The system outputs predictions of the types of crops in specific locations within the Conterminous United States (CONUS). | 01/01/2021 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Oklahoma Natural Resources Conservation Service (NRCS) Watershed Program | USDA-048 | Dam Inspection Report Document Processing | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this AI is to pull out and organize data from thousands of dam inspection documents so that we can use Microsoft Power BI to understand the condition of thousands of USDA Watershed program dams. This allows us to identify the biggest issues and trends across our collection of over 2,100 dams in Oklahoma while reducing labor hours required to complete the task manually. | The model outputs text and checkbox responses, including dam metadata, inspection issue tracking (yes and no checkboxes), and further remarks on the issue or what has been/needs to be done on the dam. | 05/01/2023 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Information Services Division | USDA-049 | Portfolio Approval and Management (PAM) Bot | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | The purpose of the model is to improve the Economic Research Service (ERS) research approval process. The system reduces the time it takes to fill out information and seek approval, improves information accuracy, and brings visibility to the approval status across various division functions. | The system provides three outputs: the approval status, summary recommendations, and generated citations. | 05/01/2024 | | ||||||||||||||||||||||||
| Department Of Agriculture | Nebraska Natural Resources Conservation Service (NRCS) | USDA-050 | GIS Invasive Tree Extraction for Field Level Users | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The purpose of the model is to estimate the spread of invasive tree infestation, specifically Eastern redcedar. This helps to avoid poor or inaccurate estimates caused by time constraints and heavy workloads when manually collecting the data. | The model outputs polygons representing the extent of trees present in the landscape. | Developed in-house | | None; | |||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-051 | DISTRIB-II: Habitat Suitability of Eastern United States Tree | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The purpose and expected benefits of the Climate Change Atlas are to give forest resource managers, forest landowners, and the general public information on the current and potential future habitats of various tree species in the eastern United States. This information can contribute to forest management decisions when considering how climate change may affect the trees currently present and how likely it is that other tree species not currently in an area might find new habitats under different climate change scenarios. | The system outputs predictions of how well a tree species can live in a certain habitat based on climate change scenarios. Maps, graphs, and reports are generated from the modeled geographic information systems (GIS) data. | 04/10/1998 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Assistant Chief Data Officers Team | USDA-052 | FSA FLP Chatbot | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of this use case is to solve the problem of searching Loan handbooks to provide better customer service. The expected benefit is to help employees provide better service. A second benefit being explored is providing Veteran-specific answers about services. | The expected output is text answers to prompt questions. | Developed in-house | | Veteran; | |||||||||||||||||||||
| Department Of Agriculture | Insurance Services | USDA-053 | ROE Document Recognition - RoeDR | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this model is to analyze documents from producers and Authorized Insurance Providers (AIPs), pick the appropriate page from the documents, read the signature date and producer signature name, convert the date and name to text, and load it into an application. This feature saves us time from having to input the data manually. We can then use the data for reporting purposes. | The model outputs the producer signature and signature date within the document as text. | Developed in-house | | None; | |||||||||||||||||||||
| Department Of Agriculture | Biotechnology Regulatory Services | USDA-054 | IRIS | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The purpose of this system is to make literature searches more effective for Biotechnology Regulatory Services. This increases work efficiency with our regulatory tasks. | The model outputs a recommended literature list for scientists. | 01/09/2023 | Developed with contracting resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-055 | Ticket Resolution Categorization (Incident/Change) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this model is to classify the resolution type and tier of all support desk tickets after they have been closed. This model allows the support team to spend more time identifying process inefficiencies and plan solutions rather than categorizing tickets. This will lead to process improvement, automation of repetitive tasks, increased productivity, and higher performance. | The model outputs the classification category of support ticket resolutions. | 06/01/2023 | Developed with both contracting and in-house resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-056 | Ticket Templatization | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | This model is a non-production exploratory model, meaning it does not make predictions but is used to explore trends within data to gain insights that can help in making data-driven decisions. It is designed to explore and analyze service and change requests submitted through the 105 general form or without templates. This model helps to identify subcategories within the larger dataset that could be candidates for standardization and automation, potentially leading to improved operational efficiency, cost savings, and customer satisfaction. | The model outputs trends within data to assist in decision making. | 01/01/2024 | Developed with contracting resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | North Dakota State Office | USDA-057 | File Rename Automation | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Mission-Enabling (Internal Agency Support) | Pilot | No | Not high-impact | The purpose of this tool is to rename thousands of documents converted from physical to digital records that were given a generic file name. This tool can grab text from page 1 of each document and apply a correct file rename instead of employees having to spend hours manually renaming documents. | The model outputs renamed files. | 11/06/2023 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of National Programs | USDA-058 | Rapid Drafting of ARS Research Summaries | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | The purpose of this tool is to summarize ongoing research from internal Agricultural Research Service (ARS) documents to allow program staff to quickly create accurate and timely summary documents, such as briefing papers, talking points for leadership, and speeches. This will give staff more time for other duties, and leadership will be able to confidently answer questions, justify budget requests, and ensure that our research is innovative and relevant. | The tool outputs talking points and short briefing papers. | | |||||||||||||||||||||||||
| Department Of Agriculture | Soil and Plants Science Division; Soil Services and Information; Conservation Information Delivery | USDA-059 | DS Hub Geo-metadata generation | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | The purpose of this AI use case is to generate metadata for Natural Resources Conservation Service (NRCS) datasets, ensuring consistency, accessibility, and compliance through generative AI. Expected benefits include increased data accessibility, reduced manual workload, minimized errors, better dataset understanding, and fast data retrieval for stakeholders. | The model outputs accurate, consistent, and compliant metadata appropriate for the existing geospatial data that it analyzed. | | |||||||||||||||||||||||||
| Department Of Agriculture | Soil and Plants Science Division; Soil Services and Information; Conservation Information Delivery | USDA-060 | Dynamic Soils Hub | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The Dynamic Soils Hub (DS Hub) under the Natural Resources Conservation Service (NRCS) is a tool designed to help both government workers and the public understand and analyze soil information. The DS Hub links different soil and conservation databases, making it easier to evaluate the environmental benefits of conservation programs by accessing previously separate data and models. This enhances the USDA’s ability to study and report on how soil properties change with conservation efforts over time. | The system outputs the class of soil based on the supplied soil information. | 11/11/2020 | Developed with contracting resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | Deputy Administrator for Compliance; Business Analytics Division | USDA-061 | Cover Crop Mapping | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Law & Justice | Pre-deployment | No | Not high-impact | This project aims to annually map fall and spring cover crop practices on farms in the U.S. Midwest. These maps are made using satellite images and models of plant growth. This data helps the agency independently find out the extent of cover crop practices. | The output is a state-level map of detected cover crops by year, classified by planting date (fall, spring). | 09/02/2022 | Developed with both contracting and in-house resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | Deputy Administrator for Compliance; Business Analytics Division | USDA-062 | Planting Date Detection | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Law & Justice | Pre-deployment | No | Not high-impact | This project aims to find out the planting dates for corn, soybean, and winter wheat on farms in the U.S. Midwest. Maps containing planting dates are made using satellite images and models of plant growth. This data helps the agency independently verify reported planting dates on farm fields, supporting efforts to ensure the integrity of their programs. | The output is an annual map of planting dates for corn, soybean, and winter wheat for crop years 2016-2023. | 09/07/2022 | Developed with both contracting and in-house resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | Deputy Administrator for Compliance; Business Analytics Division | USDA-063 | Acreage and Crop Type Validation | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Law & Justice | Pre-deployment | No | Not high-impact | This project uses satellite images and plant growth models to generate a farm field size estimate and the crop type on farms in the U.S. Midwest. This data helps the agency independently find out the accuracy of reported field sizes and crop types, supporting efforts to ensure the integrity of their programs. | The output is a validation of reported acreage and validation of reported crop type for corn, soybean, and winter wheat on farm fields. | 09/01/2022 | Developed with both contracting and in-house resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | Veterinary Services | USDA-064 | U.S. Poultry Operations and Populations Dataset | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Emergency Management | Deployed | No | Not high-impact | The purpose of this case is to develop a dataset that addresses the problem of not having complete information about where poultry farms are located and how many birds they have. Filling this gap provides detailed data on poultry farm locations and populations, which is essential for planning animal health emergencies and predicting the spread of diseases. | Output is a national-level dataset of domestic poultry operations and estimated populations. | Developed in-house | | None; | |||||||||||||||||||||
| Department Of Agriculture | Veterinary Services | USDA-065 | Equine Operations and Populations Dataset for the U.S. | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Emergency Management | Pre-deployment | No | Not high-impact | The purpose of this case is to develop a dataset that addresses the problem of not having complete information about where horse farms are located and how many horses they have. Filling this gap provides detailed data on horse farm locations and populations, which is essential for planning emergencies and predicting the spread of diseases. | Output is a national-level dataset of domestic horse operations and estimated populations. | 01/02/2023 | Developed with both contracting and in-house resources | | None; | ||||||||||||||||||||
| Department Of Agriculture | National Agricultural Statistics Service | USDA-066 | NASS - Naggle 2.0 Automated Editing Tool | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Agricultural Statistics | Pre-deployment | The purpose of this model is to determine if an answer on a survey is valid or invalid. If an answer is classified as invalid, a regression model will then suggest a corrected value. This approach will help reduce errors and improve the accuracy of survey forms, saving time and reducing the number of labor hours spent on editing. | The classification model outputs an Excel sheet with the survey, person, variable, and whether the variable is valid or invalid. The regression model outputs an Excel sheet containing the invalid records, which includes the survey, person, variable, original value, and new predicted value. | 06/03/2024 | | ||||||||||||||||||||||||
| Department Of Agriculture | Research and Development Division | USDA-067 | County-level remotely-sensed corn and soybean yield estimation | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | The purpose of this tool is to estimate yearly corn and soybean yields for each county using satellite images. More details can be found in the paper titled “An assessment of pre- and within-season remotely sensed variables for forecasting corn and soybean yields in the United States” (https://doi.org/10.1016/j.rse.2013.10.027). Providing county-level crop yield statistics allows stakeholders to make more informed plans and decisions. | The model outputs county-level crop yield estimates for corn and soybeans in bushels per acre. | 01/01/2007 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Strategic Planning and Business Services Division | USDA-068 | NASSportal Intranet Bot | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | This model will assist National Agricultural Statistics Service (NASS) staff in finding answers to questions on how to administer programs. This will decrease labor hours and increase efficiency of NASS staff in program administration. | The chatbot will provide text outputs. | 07/01/2024 | | ||||||||||||||||||||||||
| Department Of Agriculture | Research and Development; Forest Products Laboratory | USDA-069 | XyloTron/XyloPhone Wood Identification System | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Law & Justice | Pilot | No | Not high-impact | The purpose of these tools is to identify different types of wood based on their cross-section. These tools will help industries follow laws and support law enforcement in meeting national (e.g. Lacey Act) and international (e.g. CITES) regulations. | The tools will output a prediction of the type of wood. | 01/01/2016 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Procurement and Property Services | USDA-070 | Incident Invoice Document Understanding | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of this tool is to analyze incident invoices and return the values that need to be entered into a database. This new approach leads to faster invoice processing and reduces data entry mistakes for more accurate data. | The tool outputs an Excel document containing required values identified from incident invoices. | Developed with contracting resources | | None; | |||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-071 | Forest disease detection and screening | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this project is to improve tree disease diagnosis and screening, thereby facilitating ongoing efforts within and outside the Forest Service to manage diseases of forest trees. | The model will output a prediction indicating whether a tree is diseased or not, and if a tree is resistant or susceptible to a disease. | 08/03/2020 | Developed in-house | | None; | ||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-072 | Use of LLMs for data extraction | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | The purpose of this model is to quickly gather information from scientific papers to track plant diseases. The practical benefit is that this method saves time compared to manually collecting the information, which is slow and error-prone over long periods. | The model outputs a table of requested data variables (e.g., country, pathogen name, host name, etc.). | 10/01/2023 | The model outputs a table of requested data variables (e.g., country, pathogen name, host name, etc.). | ||||||||||||||||||||||||
| Department Of Agriculture | Business Operations/Chief Data Office | USDA-073 | IPWG Application Survey Analysis | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of this tool is to analyze over 5,000 responses from an internal employee survey on IT applications, then give a summary of the employee feedback regarding each IT application. This decreases the time required to go through each response manually, helping the team make informed investment decisions more quickly. | The model outputs a text summarization for each IT application in the survey data, and potentially includes text summaries of responses and sentiment analysis. The project will also produce a dashboard that allows users to see similar attributes at the agency, office, application, and individual response level. | 10/03/2024 | Developed in-house | The model outputs a text summarization for each IT application in the survey data, and potentially includes text summaries of responses and sentiment analysis. The project will also produce a dashboard that allows users to see similar attributes at the agency, office, application, and individual response level. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-074 | Fire Resilient Landscapes | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The goal of this tool is to quantify the cost of forest treatments. Benefits include providing the ability to accurately map treatment costs for users to make more informed decisions. | The tool outputs predictions in the form of raster surfaces/maps. | 08/01/2021 | Developed in-house | The tool outputs predictions in the form of raster surfaces/maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-075 | PC Rasterize | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of this tool is to be able to process point cloud data more efficiently. This will reduce costs associated with processing point cloud data. | The tool outputs point clouds and raster surfaces/maps. | 08/01/2024 | Developed in-house | The tool outputs point clouds and raster surfaces/maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-076 | Spread and Balance Sample Design | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of this tool is to produce samples that are well spread and balanced. This sample design will reduce the quantity of samples needed and further reduce costs associated with collecting field data. | The tool outputs data frames and geospatial data frames. | 05/01/2024 | Developed in-house | The tool outputs data frames and geospatial data frames. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-077 | Regression, Classification, Clustering with Hilbert Curves | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of this tool is to perform better regression, classification, and clustering. This will create new and better ways to produce various estimates, reducing cost and error. | The tools will output data frames and raster surfaces/maps. | 06/01/2024 | Developed in-house | The tools will output data frames and raster surfaces/maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development | USDA-079 | The Big Data, Mapping, and Analytics Platform (BIGMAP) Project | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The purpose of this project is to use geospatial predictions from Forest Inventory and Analysis samples to make more accurate estimates of different forest characteristics. Greater precision in estimates leads to more informed decisions about the forest resources in the US. | The model outputs predictions in the form of raster maps. | 01/01/2019 | Developed with both contracting and in-house resources | The model outputs predictions in the form of raster maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development | USDA-080 | BirdNET to detect bird vocalizations for research and species monitoring | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | BirdNET quickly scans thousands of hours of forest audio recordings to detect bird calls from species that are important for forest monitoring, like spotted owls, black-backed woodpeckers, and willow flycatchers. This decreases the time and cost associated with manually listening to recordings to identify bird calls. | The model outputs text files of bird calls, which include the bird species and time that the call was recorded. | 06/01/2021 | Developed in-house | The model outputs text files of bird calls, which include the bird species and time that the call was recorded. | None; | ||||||||||||||||||||
| Department Of Agriculture | Forest Inventory and Analysis; Southern Research Station | USDA-081 | Hurricane impact descriptions | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of the AI model is to convert a table of data about a tropical cyclone's path and estimated impact on forests into a clear and understandable story. This is part of a rapid assessment given to stakeholders after a cyclone hits, so it needs to be done fast. We are creating a tool to automate this process and the AI helps to make better quality reports. | The model outputs a few paragraphs of easy-to-read text that explains the effects of a cyclone. | 07/01/2024 | Developed in-house | The model outputs a few paragraphs of easy-to-read text that explains the effects of a cyclone. | None; | ||||||||||||||||||||
| Department Of Agriculture | Southern Research Station-4353 | USDA-082 | Predictive flood modeling | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Transportation | Pre-deployment | The purpose of this tool is to predict water flow during floods and assess the vulnerability of drains under roads. This will help the U.S. Department of Transportation (USDOT) and the USDA Forest Service make informed decisions in drain restoration and protection against flooding. | The model outputs water flow predictions during flood events and the vulnerability level of drains under roads. | 10/01/2024 | The model outputs water flow predictions during flood events and the vulnerability level of drains under roads. | ||||||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-083 | FuelCast | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Emergency Management | Deployed | No | Not high-impact | This project predicts future fuel conditions and gives early warnings to help plan fuel management. The benefits include better preparation of US firefighting teams for potential increases in large wildfires. This system also reduces the workload for fire behavior analysts by providing fuel estimates, so they don't have to spend as much time figuring out fire behavior patterns through trial and error. | The model outputs predictions of the future quantity of wood and plants that could be present and contribute to wildfires. | Developed with both contracting and in-house resources | The model outputs predictions of the future quantity of wood and plants that could be present and contribute to wildfires. | None; | |||||||||||||||||||||
| Department Of Agriculture | Northern Region (R1) | USDA-084 | R1 Forest Vegetation Modeling | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Energy & the Environment | Pre-deployment | The purpose of this tool is to use satellite images and methods such as LiDAR (light detection and ranging) with machine learning to model forest vegetation and make estimates. The use of machine learning improves models and estimates with decreased time and cost. | The model outputs predictions of forests and vegetation in the form of raster and vector geospatial maps. | 01/01/2024 | The model outputs predictions of forests and vegetation in the form of raster and vector geospatial maps. | ||||||||||||||||||||||||
| Department Of Agriculture | Geospatial Office | USDA-085 | ESRI Support Chat Bot | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Science & Space | Pre-deployment | The purpose of this chatbot is to help handle requests for support with geospatial software between our team and the software vendor. The benefits include saving time when dealing with Environmental Systems Research Institute (ESRI) support issues and reducing the number of specific ESRI support tickets that need to be sent to the Forest Service Geospatial Helpdesk or to ESRI through contract support services. | The chatbot outputs support ticket entries, code snippets for queries, and text and links for support ideas and answers. | The chatbot outputs support ticket entries, code snippets for queries, and text and links for support ideas and answers. | |||||||||||||||||||||||||
| Department Of Agriculture | Southern Research Station | USDA-086 | Wildlife deterrent system | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of the AI device is to keep coyotes out of a fenced area by blocking their entry through a gap in the fence, while still allowing other wildlife to pass through. The benefits include making ecological research possible that couldn't be done otherwise, and saving time by reducing the need to watch camera footage and manually control the fence. | The model performs video object detection of coyotes and arms an electrical barrier to prevent their passage. | Developed with contracting resources | The model performs video object detection of coyotes and arms an electrical barrier to prevent their passage. | None; | |||||||||||||||||||||
| Department Of Agriculture | Pacific Southwest Research Station | USDA-087 | The Lost Meadows Model | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The purpose of this model is to find out where meadows used to be and how often they appeared in order to understand their original state and their potential for restoration. The discovery of these areas increases the potential for meadow restoration, which can benefit biodiversity, wildfire management, carbon storage, and water storage. | The model outputs predictions of areas with meadow-like environmental conditions. The predicted areas include a mixture of existing but undocumented meadows, non-meadow lands that may have once been meadows, and meadow-like areas that may never have been a meadow. | 10/10/2022 | Developed in-house | The model outputs predictions of areas with meadow-like environmental conditions. The predicted areas include a mixture of existing but undocumented meadows, non-meadow lands that may have once been meadows, and meadow-like areas that may never have been a meadow. | None; | ||||||||||||||||||||
| Department Of Agriculture | Pacific Southwest Research Station | USDA-088 | Markov random fields for mixed forests | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this tool is to improve the accuracy of estimates in machine learning models. The benefits include helping stakeholders make more informed and effective decisions for managing mixed forests. | The model outputs predicted counts of tree species in a location, and the degree of competition between different tree species in the same location. | 10/01/2022 | Developed in-house | The model outputs predicted counts of tree species in a location, and the degree of competition between different tree species in the same location. | None; | ||||||||||||||||||||
| Department Of Agriculture | Pacific Northwest Research Station | USDA-089 | AI for regional forest mapping and monitoring | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The purpose of this model is to use existing satellite images and forest survey data from the USDA Forest Service to create detailed maps of forest structures. This information will help land managers be more effective and efficient with their planning. | The model outputs GeoTiffs (raster maps of forest attributes, such as tree density and tree species data). | 01/01/2000 | Developed with contracting resources | The model outputs GeoTiffs (raster maps of forest attributes, such as tree density and tree species data). | None; | ||||||||||||||||||||
| Department Of Agriculture | WO Research and Development | USDA-090 | IOL Focus Group and Survey Sensemaking | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Education & Workforce | Pre-deployment | The purpose of this model is to efficiently process large quantities of focus group transcripts and survey results. Benefits include decreased labor hours manually processing transcripts and surveys. | The model outputs text summaries of focus group comments and surveys. | 06/11/2024 | The model outputs text summaries of focus group comments and surveys. | ||||||||||||||||||||||||
| Department Of Agriculture | Geographic Information System (GIS) Stakeholder Community - all deputy areas | USDA-091 | Esri ArcGIS Pro Deep Learning Modules | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this tool is to enhance scientific modeling and analysis, which will standardize Geographic Information System (GIS) workflows for modeling and analytics. | The tools will output image classifications. | 04/01/2024 | Developed in-house | The tools will output image classifications. | None; | ||||||||||||||||||||
| Department Of Agriculture | National Forest System; Ecosystem Management and Coordination | USDA-092 | EMC Comment Parsing and Analysis | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | This project aims to extract, categorize, and respond to public comments based on past responses. The benefits include creating a standardized process for handling comments, making public comment data more accessible and ready for AI use, reducing the time and cost of processing comments, minimizing human errors due to high workloads and tight deadlines, improving responsiveness to public concerns, increasing public trust, enhancing accountability through clear reporting, and supporting team training by building a database of common themes and response strategies. | The model will output text analyses of categories pulled from public comments and recommend responses based on historic responses. | 03/01/2024 | The model will output text analyses of categories pulled from public comments and recommend responses based on historic responses. | ||||||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-093 | QUIC-Fire processing and analysis | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Science & Space | Pre-deployment | AI is being used to analyze data from fire-atmosphere models to understand fire behavior and effects. The goal is to create tools that will help fire and smoke managers use the QUIC-Fire (Quick Urban & Industrial Complex-Fire) model for planning controlled burns and assessing wildfire behavior. | The AI output will be a collection of metrics that provide building blocks for a tool that fire and smoke managers will use to implement QUIC-Fire (Quick Urban & Industrial Complex - Fire) into their decision-making. | The AI output will be a collection of metrics that provide building blocks for a tool that fire and smoke managers will use to implement QUIC-Fire (Quick Urban & Industrial Complex - Fire) into their decision-making. | |||||||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-094 | Analysis of prescribed fire turbulence data | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this project is to find connections between the heat from a wildfire and the turbulence it creates in the air. Current tools are not very accurate and can make mistakes. This AI effort helps create better tools that can assist fire and smoke managers in making decisions about smoke management. | The model outputs correlation analysis of how temperature change is associated with air turbulence measurements above a prescribed fire. | 06/12/2023 | Developed in-house | The model outputs correlation analysis of how temperature change is associated with air turbulence measurements above a prescribed fire. | None; | ||||||||||||||||||||
| Department Of Commerce | BEA | DOC-53 | GitHub Copilot for Code Modernization | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | BEA | DOC-1 | Meeting Transcription Summarization | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-5 | Real Time Classification for the Economic Census | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-56 | Automated Change Detection | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-57 | School staff information extraction from web page text | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-58 | Census Bureau Demographic Frame Person-Place Model | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-59 | Race and Ethnicity Autocoding | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-60 | Information extraction for web scraped data for Group Quarters frame enhancement | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-61 | Current Population Survey (CPS) Name Screening Tool | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-62 | Linkage and Matching Program (LaMP) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-63 | Census Research Exploration and Analysis Tool (CREAT) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-64 | Dr. NAICS LLM | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-65 | Automating Multilingual Census Data Processing: An AI and Transformer-Based Pipeline for Efficient Language Detection and Translation for Short-Text | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-66 | FAQ for SMaRT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-67 | Statistical package syntax development and debugging | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-68 | DSD Python Code Translation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-69 | Census API GPT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-70 | Natural Language Search for data.census.gov | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-71 | ACES CAPEX (structures, equipment, other) Machine Learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | FirstNet | DOC-2 | FirstNet Authority Network Program Management Data Analytics Tool and Service | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | FirstNet | DOC-74 | FirstNet Authority Communications Topaz Labs Photo Editing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-75 | Global Business Navigator Chatbot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-76 | Generative AI Tools Pilot - Global Markets | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-77 | Generative AI Tools Pilot - Enterprise & Solutions Architecture | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-3 | ChatGPT Enterprise | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-4 | Google AgentSpace | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-6 | Amazon Web Services NLP, Classification, Text Mining | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-7 | Google Public Sector NLP, Classification, Text Mining | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-8 | Google Colab and VertexAI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-9 | Anthropic - Claude For Government | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-16 | Community-based Messaging with LLM | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-17 | NWS Mutual Aid Coordination | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-18 | Draft Fronts for Surface Analysis | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-19 | Enhance LSR (Local Storm Report) Creation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-20 | NWS Public Safety Language Translation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-21 | Scientific Code Development Assistance | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-23 | Evaluate AI Models for Probabilistic Hurricane Predictions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-24 | Evaluate AI for Forecasting Fronts | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-25 | AI-Driven Global Forecast Model and Ensemble | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-26 | AI-based Bias Correction and Downscaling for Weather Models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-27 | Improving Accuracy of Physical-based Models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-28 | Improve Background Error Modeling for JEDI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-29 | Enhanced Precipitation Forecasting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-30 | Ensemble Analysis to Identify Error Sources | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-31 | Enhanced Fire Weather, Aviation, and Storm Surge Forecast Guidance | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-32 | Enhanced Flood Risk and Impact Modeling | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-33 | Fisheries ESA Section 7 Biological Opinions and EFH Consultations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-34 | Fisheries Global Seafood Data System (GSDS) Audit Support Application | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-35 | Optics Data Processing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-36 | Electronic Monitoring | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-37 | Passive Acoustic Monitoring | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-38 | Active Acoustics | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-39 | OLE Looker | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-40 | Grants | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-41 | Streamlining Fisheries DevSecOps with Gemini Code Assist | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-42 | ENSO and Hurricane Outlooks using observed/analyzed fields | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-43 | Drought outlooks by using ML techniques | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-44 | NN/ML for OPC probabilistic guidance and GEFS-Waves | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-45 | ProbSR (probability of subfreezing roads) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-47 | AI/ML based atmospheric physics parameterizations for numerical weather prediction | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-48 | Detecting rip currents with coastal imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-49 | AI QC of water level observations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-50 | Flowcytobot imaging system data using ML | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-51 | HABScope | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-52 | IOOS Coastal Modeling Cloud Computing Sandbox | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-54 | Classifying community shifts with Self-Organizing Maps | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-55 | Picky | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-72 | Structure from Motion photomosaic work in SE/Caribbean | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-73 | Utilizing Machine Learning for Coral Identification at Flower Garden Banks National Marine Sanctuary | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-81 | Improving the quality of NGS's GPS on Benchmarks with machine learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-22 | Coastal Change Analysis Program (C-CAP) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-83 | Supporting the Development of System Resilience Indicators for Wild Rice in Lake Superior, Lake Michigan, and Lake Huron | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-85 | Great Lakes Coastal Assembly Coastal Wetland Conservation Framework | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-87 | Automated Post-Disaster Vessel and Debris Mapping from Remotely Sensed Imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-89 | Machine Learning Collaboration Yields New Methods to Measure Shoreline Marine Debris | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-90 | Mussel Watch data management | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-91 | Ice seal detection and species classification in multispectral aerial imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-92 | Edge AI survey payload development | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-93 | Steller sea lion automated count program | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-84 | Steller sea lion brand sighting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-94 | Use of an Imaging Flow Cytobot for identification of phytoplankton and HABs in Alaska's Large Marine Ecosystems | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-95 | Automated classification of zooplankton images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-96 | Acoustic and image-based habitat classification in the Gulf of Alaska using machine learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-98 | Automated detection and abundance estimation of salmon and pollock in Alaska's walleye pollock fishery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-100 | Predicting annual market squid returns using machine learning methods | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-101 | AI-based automation of acoustic detection of marine mammals | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-102 | Passive acoustic analysis using ML in Cook Inlet, AK | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-103 | Automated matching of identification photographs of Cook Inlet beluga whales | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-104 | Automate detection of marine mammals and birds in still images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-105 | Automated detections of fish and invertebrates in Habcam images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-106 | Deep learning algorithms to automate right whale photo id | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-107 | Ropeless Geolocation System | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-108 | Capitalizing on a groundfish image library to test automated image classification in the northeast region. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-109 | Geospatial Artificial Intelligence for Animals (GAIA) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-110 | Robotic microscopes and machine learning algorithms remotely and autonomously track lower trophic levels for improved ecosystem monitoring and assessment | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-111 | Advancing sustainable shellfish aquaculture through machine learning and automated data collection on fish communities | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-112 | Integrating AI into AUV image analysis workflow | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-113 | Development of large annotated image data sets for training detection of groundfish and benthic invertebrates | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-114 | Using CoralNet to develop substrate detection models for AUV imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-115 | AI and Machine Learning for end-to-end marine ecosystem model calibration and validation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-116 | Climate Change Impacts on the California Current Marine Ecosystem | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-117 | Puget Sound Climate Impacts on Orcas and Salmon | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-118 | Automating Anadromous Fish Counts using imaging sonar data | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-119 | VIAME: Video and Image Analysis for the Marine Environment Software Toolkit | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-120 | Artificial Fintelligence: Automating photo-ID of dolphins in the Pacific Islands | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-121 | Machine learning to automate review of electronic monitoring data collected from the Hawaii Longline Fisheries | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-122 | An Interactive Machine Learning Toolkit for Classifying Species Identity of Cetacean Echolocation Signals in Passive Acoustic Recordings | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-123 | Advancing the use of technology for port sampling in the US Caribbean using image analysis for length composition | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-124 | Fast tracking the use of VIAME for automated identification of reef fish | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-125 | Automated detection of sea turtles from Uncrewed Aircraft System (UAS) Surveys | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-126 | AI for automated Rice's whale call detections and soundscape sources | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-127 | Using FinFindR (computer-assisted identification of dorsal fins) for automation of photo processing. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-128 | Developing automation in the shrimp fisheries electronic monitoring. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-129 | Developing image library for EM collected data to ID Protected Species Bycatch. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-130 | Developing automation in the reef fish fisheries electronic monitoring. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-131 | Automation and detection of Marine Mammals and Turtles from AUV collected imagery. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-132 | Integrating query learning and domain adaptation to develop robust ML algorithms to determine species and count using optical data gathered from fisheries-dependent and fisheries-independent data collected in the Gulf of Mexico. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-133 | Developing deep learning models to automate age determination | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-134 | Using community-sourced underwater photography and image recognition software to study green sea turtle distribution and ecology in southern California | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-135 | Searching for large whales in UAS photographic strip transect images: developing an AI/ML object detection model using aerial photogrammetry catalogues | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-136 | Automated whale blow detections using IR cameras | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-137 | Partially automated matching of gray whales in lateral photo identification images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-86 | BANTER, a machine learning acoustic event classifier | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-138 | California sea lion, Steller sea lion, and northern fur seal automated count program in the California current | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-139 | Uncertainties and recommendations for projecting species distributions under climate change | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-140 | Quantifying the spatiotemporal overlap of albacore with diverse fisheries and IUU risk factors in the North Pacific | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-141 | Advancing the West Coast Ocean Forecasting System through Assessment, Model Development, and Ecological Products | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-142 | Dynamic prediction system for illegal, unregulated, and unreported fishing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-143 | Where did they not go? Considerations for generating pseudo-absences for telemetry-based habitat models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-144 | Predictability of Species Distributions Deteriorates Under Novel Environmental Conditions in the California Current System | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-145 | Denoising Citizen Science Big-Data - Empowering Magnetic Navigation with Machine Learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-146 | SUVI Thematic Maps | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-147 | Use of AI/ML CNN for VIIRS cloud clearing and super resolution | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-88 | LightningCast: A lightning nowcasting model | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-148 | The Development of ProbSevere v3 - An improved nowcasting model in support of severe weather warning operations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-149 | The VOLcanic Cloud Analysis Toolkit (VOLCAT): An application system for detecting, tracking, characterizing, and forecasting hazardous volcanic events | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-150 | Automated detection of hazardous low clouds in support of safe and efficient transportation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-151 | The Next Generation Fire System (NGFS): Automated human expert-like detection of fires in satellite imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-152 | Nowcasting Extreme Fire Behavior | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-153 | Leveraging Machine Learning to Enhance the Quality of Ocean Observations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-154 | Work with Allen Institute on developing the Ai2 Climate Emulator (ACE) for seamless weather applications, including to emulate SHiELD-based medium-range forecasts for large ensemble predictions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-155 | Work with NMFS and NOS using AI/ML to understand how fish habitats are shaped by ocean conditions, and how changes in conditions might impact fisheries distributions and productivity. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-156 | AI/ML techniques to infer local climate conditions based on large-scale climate drivers (i.e., empirical-statistical downscaling) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-157 | Use of AI/ML techniques to understand the factors controlling coastal hypoxia and its predictability | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-158 | A Hybrid Data-driven and Physics-based Framework for Atmospheric Radiative Transfer Modeling | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-159 | Detection of hardware issues in complex, wide-area computing systems based on non-intrusive workflow performance data gathered via the GFDL EPMT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-160 | AI based Precipitation estimation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-161 | Optimization of highly-concurrent workflow task management systems based on anomaly detection using non-intrusive workflow performance data gathered via the GFDL EPMT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-162 | Weather.gov 2.0 Rebuild and NWS API Improvement Efforts | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-163 | Utilizing Neural Operator Deep Learning to Enhance National-Scale Coastal Ocean Predictions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-164 | Physics-Informed Neural Network Hurricane Vortex Reconstruction | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-165 | A hybrid physics-machine learning model for orographic precipitation forecasting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-166 | Observation-Centric Estimation and Learning for Outlook Trajectories (OCELOT) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-167 | Severe Convective Weather Parameter Generation using AI/ML with Microwave/Infrared Sounder Satellite Observations for Enhanced Weather Analysis and Forecast | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-168 | AI Based Calibration/validation of satellite microwave sounder observations for Numerical Weather Prediction (NWP) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-169 | AI/ML Enterprise Cloud Mask development and operational implementation for all NOAA and international partner sensors | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-170 | AI-powered Chatbot for Federal Funding Assistance (Grants) Guidance Dissemination | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-171 | Essential Fish Habitat Consultation Efficiency Increases and Template Creation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-172 | Generative AI for Biological Opinions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-173 | Administrative Tools - (Such as meeting/document management, reasonable accommodation needs, other administrative efficiencies such as broad code generation/translation/optimizations) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-174 | CED Generative AI Pilot Program | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-175 | GitHub CoPilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-176 | Document existing functions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-177 | LLM-Generated Causal Models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-178 | SWFSC Publications Search | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-179 | Using ChatGPT4/DALLE3, Adobe Sensei and similar for design ideation and image generation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-180 | Coastal Zone Management Act Section 312 Evaluations AI Pilot Project | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-181 | Support Chat Bot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-182 | A Study to Determine Natural Language Processing (NLP) Capabilities with the NCCF Open Knowledge Mesh | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-183 | Developing Access Capabilities for the NCCF Open Information Stewardship Service (OISS) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-184 | Digital Twin for Earth Observations Using Artificial Intelligence | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-185 | Improving Imagery Visualization using Limb-Correction and AI Resolution Enhancement for Microwave sensors | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-186 | Super-resolution of Satellite Imagery Products using Generative AI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-187 | Ocean AI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-188 | AI based radiative transfer emulator for data assimilation and remote sensing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-189 | Integrating NOAA APIs with LLMs for Enhanced Access to Environmental Data and Insights | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-190 | AI Pair Programming with GitHub CoPilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-191 | Gemini (Previously Duet AI) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-192 | AI-Driven Predictive Maintenance for the NOAA Fleet | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-193 | AI-Enhanced Emergency Response and Mission Continuity Planning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-194 | Emissions Reduction for NOAA Fleet | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-195 | Fleet Requirements Analysis and Management Engine (FRAME) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-196 | Assisted translation of code between Matlab, R, and Python | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-197 | Storm Events Knowledge Graph Chatbot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-198 | Alma/Primo | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-199 | Amazon Q Developer Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-200 | Apigee with Gemini Code Assist for OpenAPI Development | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-11 | Science Data Portal Autosuggest Search | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-82 | Grammarly | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-10 | LLM support for NIST research (Azure) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-12 | Library Market Research GenAI Tools Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-13 | Google Gemini | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-14 | LLM support for NIST research (NIST HPC) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-15 | LLM support for NIST research (Google Vertex) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-46 | WAWENETS | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-78 | Streamline Spectrum Activities | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-79 | Spectrum Visualizations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-80 | Use of Lexis. Other searches for legal research | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-201 | Grants Program Administration - Inquiry management Chatbot Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-202 | Grants Program Administration - BEAD Monitoring Plan Agent Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-203 | Grants Program Administration - Tiered Environmental Assessment AI Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-204 | NTIA Grants Portal (NGP) - AI Summarization, Analytics, and Automations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-205 | Using AI to assess the AI-readiness of Commerce data | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-206 | BAS Assist | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-207 | DOC Chat | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-208 | USAi.gov | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-209 | PRISM BidScale | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-210 | Implement MS365 Copilot in OS | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-211 | Prior Art Search: AI Retrieval for Patent Search (PSAI) (Similarity Search) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-212 | Pre-Exam Application: CPC Classification | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-213 | TM Word and Image Search Tool (TWIST) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-214 | Prior Art Search: Patent Image Search | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-97 | Virtual Assistant (Public) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-215 | Pre-Exam Application: Front End Document Code Quality Control | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-99 | Pre-Exam Application: Skill Group Matching | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-216 | GenAI platform and applications for general productivity | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-217 | Prior Art Search: Automated Search AI Pilot (ASAP! Report) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-218 | First Office Action Creation: Claim Comparison for Double Patenting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-219 | First Office Action Creation: Examination Analysis Determination/Analysis of Informalities (35 USC 101 and 112) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-220 | Patent Fraud Detection & Mitigation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-221 | Assisted software development | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-222 | Call Center Automations (Internal) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-223 | Pre-Exam Application Processing: Trademark Center (TM Center) AI Automation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-10 | AI for Operations Center | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Natural Language Processing | Reduce human effort and increase efficiency in identifying Lessons Learned to work planning. | Increase awareness and use of Lessons Learned. | Recommended Lessons Learned documents related to proposed and ongoing Work Projects. | 01/01/2022 | Developed in house | Yes | Recommended Lessons Learned documents related to proposed and ongoing Work Projects. | ANL Operational data. | No. | No | Yes | Not applicable | Yes | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Direct usability testing | |||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-118 | Natural Language Processing | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Developed natural language processing (NLP) algorithms will be used to help categorize and understand various energy storage efforts in the R&D communities. Additionally, the algorithms will identify trends within the discovered and selected topical focus areas in energy storage. | Categorize and understand energy storage efforts | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-120 | DOE AI Data Infrastructure System | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Leveraging generative AI and cloud-enabled data infrastructure to improve carbon capture and storage user experience | Improve connectivity and create adaptive user interface | User interface/data | User interface/data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-125 | Creation of polymer datasets and inverse design of polymers with targeted backbones having High CO2 permeability and high CO2/N2 selectivity. | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Creation of polymer datasets and inverse design of polymers with targeted backbones having High CO2 permeability and high CO2/N2 selectivity. | Predict permeability and selectivity of polymers | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-150 | To use AI to calibrate the simulation model by matching simulation data with production history data. | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | To use AI to calibrate the simulation model by matching simulation data with production history data. | Calibrate simulation models | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-179 | Data discovery, processing, and generation using machine learning for a range of CCS data and information | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Data discovery, processing, and generation using machine learning for a range of carbon capture and storage data and information | Data compression, clustering, mapping | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-195 | Machine Learning for geophysical data inversion | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | R&D use case that is NOT being used to control or significantly influence a decision or outcome about individuals and does not have an approved agreement for transition into agency operations. | Classical/Predictive Machine Learning | Leak detection. | Faster/better leak detection. | Synthetic seismic and gravity data. | 09/26/2025 | Developed in house | No | Synthetic seismic and gravity data. | Seismic and gravity data, potentially geological models and leak locations for training labels. | If disclosable, data is made accessible through https://edx.netl.doe.gov/ | No | None of the above | Yes | If disclosable, data is made accessible through https://edx.netl.doe.gov/ | |||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-218 | To help automate data discovery and preparations to support a range of CS models, tools, and products | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Natural Language Processing | To help automate data discovery and preparations to support a range of CS models, tools, and products | Automate data discovery | Data | Data | ||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-328 | ORNL: Foundational AI Research | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-329 | ORNL: AI for Materials Science | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-330 | ORNL: AI for Experimental Facilities Operations | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Agentic AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-331 | ORNL: AI for Transportation and Mobility | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-333 | ORNL: AI for Energy Generation and Distribution | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-334 | ORNL: AI for Advanced Manufacturing | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | improve manufacturing | improve manufacturing | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-336 | ORNL: AI for Bio and Health Sciences | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | improve health | improve health | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-337 | ORNL: AI for the Smart Laboratory | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Agentic AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-338 | ORNL: AI for Neutron Science | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-339 | ORNL: AI for Earth Systems | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Generative AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-340 | ORNL: AI for Fusion Energy | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | IM-60 - IM Enterprise Operations and Shared Services (IM) | DOE-347 | Network Security and Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | It does not meet the requirements to be high-impact | Classical/Predictive Machine Learning | Detect attacker behavior and triage security events | Vectra AI is a structured and unstructured machine learning (ML) and security-led artificial intelligence (AI) tool used to detect patterns and anomalous or previously unseen activities inside petabytes of network and log data within the DOE HQ EITS networking boundary and cloud environments. | Prediction: The Vectra AI Platform with Attack Signal Intelligence uses AI to analyze the behavior of attackers, automatically apply triage, correlate, and prioritize each security event or incident. | 05/12/2019 | Purchased from a vendor | Vectra | Yes | Prediction: The Vectra AI Platform with Attack Signal Intelligence uses AI to analyze the behavior of attackers, automatically apply triage, correlate, and prioritize each security event or incident. | Network data flow and system logs | No | No | No | Vendor owned - code not available. | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-349 | Advancing Market-Ready Building Energy Management by Cost-Effective Differentiable Predictive Control | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Reinforcement Learning | Enhance building energy management with predictive control, safety verification, and optimization. | Lowers building energy costs while ensuring safe, resilient operations. | Lowers building energy costs while ensuring safe, resilient operations. | 01/10/2023 | Developed in house | Yes | Lowers building energy costs while ensuring safe, resilient operations. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-350 | Adaptive Cyber-Physical Resilience for Building Control Systems | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Reinforcement Learning | How to maintain efficient, reliable, and secure operation of building control systems in the face of disruptions, changing conditions, or cyber-physical threats. | Improves drug safety and reduces adverse health impacts in vulnerable populations. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | PassiveLogic | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-351 | Elucidating Genetic and Environmental Risk Factors for Antipsychotic-induced Metabolic Adverse Effects Using AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Classical/Predictive Machine Learning | Identifying individuals at higher risk for adverse metabolic effects from antipsychotic medications through predictive modeling of genetic and environmental data. | Improves drug safety and reduces adverse health impacts in vulnerable populations. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-352 | APT Analytics | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Classical/Predictive Machine Learning | Automate analysis of atom probe tomography (APT) data for faster scientific insights. | Speeds up materials research through automated nanoscale data analysis. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-353 | AI used for predictive modeling and real time control of traffic systems | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Reinforcement Learning | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | Reduces traffic congestion, energy use, and greenhouse gas emissions. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-354 | Laboratory Automation | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Computer Vision | Automate SEM/TEM data acquisition by identifying regions of interest with machine learning. | Increases efficiency and throughput of scientific imaging and analysis. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-355 | Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Natural Language Processing | Improving Information Access, Understanding, and Productivity through Language Automation | Accelerates scientific discovery and next-gen computing for earth and embedded systems. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-356 | Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Classical/Predictive Machine Learning | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | Promotes sustainable, economically viable waste-to-energy transitions. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-357 | Managing curb allocation in cities | Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Classical/Predictive Machine Learning | Manage curb space dynamically in cities to address rising demand for curbside parking. | Improves urban mobility and equitable access to curb space. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-358 | Regional waste feedstock conversion to biofuels | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Classical/Predictive Machine Learning | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | Promotes sustainable, economically viable waste-to-energy transitions. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-359 | Use of developed ML techniques to parse opensource text-based information. | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Natural Language Processing | Parse open-source text to define disadvantaged communities for energy transition planning. | Informs equitable energy transition policies for disadvantaged communities. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-360 | AI techniques for identification of suitable delivery parking spaces in an urban scenario | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Classical/Predictive Machine Learning | Identify optimal urban delivery parking spaces to support EV freight adoption. | Supports sustainable freight delivery and electric vehicle adoption. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Cisco | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-361 | Surrogate models for probabilistic Bayesian inference | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Classical/Predictive Machine Learning | Estimate unknown model parameters using surrogate models for probabilistic Bayesian inference. | Enables faster, more reliable insights from complex physical models. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-410 | AI for system design optimization (e.g., detector, accelerator) | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | ||||||||||||||||||||||||||
| Department Of Energy | WAPA - Western Area Power Administration (PMA) | DOE-425 | FIMS Invoice BOT | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Employee Reimbursements and Purchase Power processes | Employee Reimbursements and Purchase Power processes | Employee Reimbursements and Purchase Power processes | |||||||||||||||||||||||
| Department Of Energy | PM HQ - Office of Project Management (PM) | DOE-426 | PARSGPT | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | AI is used for Project Management, internal investigations and audits. | Generative AI | Project Management, internal investigations and audits | The value-add is derived from providing an accessible way for PM Analysts to safely interact with LLM technology. | Free form text response to questions (Chatbot) | 22/05/2025 | Developed in house | Yes | Free form text response to questions (Chatbot) | Data is not required to be reported. | No | Yes | Code is not open source. | |||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-427 | AI Incubator Sandbox | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Agentic AI | Provide a secure, multimodal AI chatbot sandbox for experimentation without internet access. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-430 | ServiceNow Predictive Intelligence | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Generative AI | Improve helpdesk efficiency and data quality | Provide better and more consistent classification of ticket data entered into ServiceNow | Field classification data | 01/01/2024 | Purchased from a vendor | ServiceNow (SAAS hosting provider) | Yes | Field classification data | Existing ticket data is used to train the model with data and training servers stored within FedRAMP High data centers where ServiceNow is hosted | No | None of the above | No | Yes | Positive impact on laboratory cost/time efficiency for helpdesk staff | Yes – by an agency AI oversight board not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-431 | AI-Enhanced Lab Assist | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Generative AI | Detect trends in lab planning/control data to improve efficiency and knowledge sharing. | Integrating lessons learned into Lab Assist Activity Planning to enhance operational efficiency, improve information sharing leveraging best practices, and foster a culture of continuous improvement. | Leverage AI for trend detection in working planning and control data. | 01/10/2024 | Developed in house | Yes | Leverage AI for trend detection in working planning and control data. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | SWPA - Southwestern Power Administration (PMA) | DOE-432 | SWPA Generative AI | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | This was a testing instance and does not meet the definition of high-impact. This AI instance was not trained on any production data. | No | No agency data is used for training | No | No | ||||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-433 | Microsoft 365 Copilot (Productivity Suite) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Generative AI | Automate and streamline productivity tasks in Microsoft 365 apps for staff efficiency. | This helps PNNL streamline workflows, improve efficiency, and allow researchers to focus more on innovation and less on administrative tasks, ultimately accelerating scientific research and operational effectiveness. | Using Microsoft 365 Copilot, PNNL aims to produce enhanced document quality, increased efficiency, insightful data analysis, improved collaboration, automated workflows. | 01/10/2024 | Purchased from a vendor | Microsoft | Yes | Using Microsoft 365 Copilot, PNNL aims to produce enhanced document quality, increased efficiency, insightful data analysis, improved collaboration, automated workflows. | No | No | No | None of the above | Yes | |||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-434 | NLCOO AI for Lessons Learned tool | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Natural Language Processing | Provide best Lessons Learned based on the Problem a user is trying to address. | Better search to gain insights from existing lessons learned to improve how we do work. | Search list of relevant documents | 01/01/2025 | Developed in house | No Vendor Involved | Yes | Search list of relevant documents | Not trained on any agency or LANL data | No | None of the above | Yes | Yes | This app has made it very easy to identify lessons learned from across the enterprise. | Agency CAIO has waived this minimum practice and reported such waiver to OMB | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-435 | Spot for automated sensing, inspection, and capture | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Computer Vision | Conduct automated sensing, inspection, and data capture in challenging environments with robots. | Spot, a mobile robot, can navigate hazardous or hard-to-reach areas to perform inspections and gather precise data with its advanced sensors and cameras. | Improving efficiency and the accuracy of sensor research data. | 01/10/2024 | Developed with both contracting and in-house resources | Boston Dynamics | Yes | Improving efficiency and the accuracy of sensor research data. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-436 | Microsoft Power Platform capability that provides AI to automate processes in Power Apps and Power Automate | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Generative AI | Automate workflows and processes within Microsoft Power Apps and Power Automate. | By leveraging AI for automation, PNNL can automate routine tasks such as data entry, reporting, and workflow management, freeing up researchers and staff to focus on higher-value activities. | Enhances productivity, reduces human error, and leads to more efficient management of research projects and resources. | 01/10/2024 | Purchased from a vendor | Microsoft | Yes | Enhances productivity, reduces human error, and leads to more efficient management of research projects and resources. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | HC HQ - Office of the Chief Human Capital Officer (HC) | DOE-437 | CAISY | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | CAISY is not designated high-impact with regard to the definitions indicated in OMB Memo M-25-21 | Reinforcement Learning | The feature's main objective is to mimic real-life situations in the workplace using predefined scenarios | 1. CAISY provides interactive, scenario-based content powered by AI where a learner can practice new skills in a safe space. This AI simulation content allows learners to choose a role, practice specific skills by responding to AI prompts, and receive adaptive, personalized feedback to guide their development 2. The user is introduced by an avatar that is generated with AI text-to-video. After the introduction, the learner can either interact by typing or using speech-to-text (STT) and text-to-speech (TTS) services for more immersive and natural interaction. When the conversation is over, the learner will receive a rating and evaluation | speech-to-text (STT) and text-to-speech (TTS) services for more immersive and natural interaction. | 21/08/2024 | Purchased from a vendor | SKILLSOFT | Yes | speech-to-text (STT) and text-to-speech (TTS) services for more immersive and natural interaction. | None of the above | No | N/A | No | N/A | |||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-438 | ServiceNow Virtual Agent | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Natural Language Processing | Provide chatbot services to help customers resolve issues or open service requests that do not require human intervention | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing ticket data is used to train the model with data and training servers stored within FedRAMP High data centers where ServiceNow is hosted | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-439 | Databricks AI for Cloud Data Warehouse | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Generative AI | Streamline AI/ML solution building and governance in a unified cloud data warehouse. | Efficiency in analytics and deployment of AI/ML models | Recommendation based on analytic input | 01/02/2025 | Purchased from a vendor | Databricks | Yes | Recommendation based on analytic input | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | EHSS HQ - Office of Environment Health Safety and Security (EHSS) | DOE-440 | DOE Technical Standards | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | The use of AI in this office does not serve as the principal basis for decisions or actions that have legal, material, binding, or significant effect on rights or safety. Its use is to enhance quality and technical accuracy | Generative AI | Aid employees in improving the quality and technical accuracy of their work products. | Enhance quality and accuracy of technical standards, supporting DOE's commitment to safety excellence. | Recommendations and feedback for improvement | 02/10/2025 | Developed in house | EnerGPT | No | Recommendations and feedback for improvement | None. | No | No | None. | The potential impact is that inaccurate inputs feeding the AI-generated answers could lead to inaccurate output. For this reason, all use of AI by EHSS-11 is thoroughly vetted and checked by employees prior to consideration for use. | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-441 | Microsoft Copilot for Security | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for high-impact use case | Generative AI | Strengthen cybersecurity with AI-driven threat detection, response, and vulnerability management. | Proactively identify, mitigate, and respond to threats. Copilot assists in real-time monitoring, threat detection, incident response automation, and vulnerability management. | The intended outputs of using Microsoft Copilot for Security at PNNL include real-time threat detection, automated incident response, enhanced data protection, compliance reports, security insights, reduced cyber risk, and sustained operational continuity | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | The intended outputs of using Microsoft Copilot for Security at PNNL include real-time threat detection, automated incident response, enhanced data protection, compliance reports, security insights, reduced cyber risk, and sustained operational continuity | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-442 | Text To Speech Audio Generation | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Does not match any of the identified categories in M-25-21. Is used to generate audio in courses that do not have a direct impact to safety/data security. | Generative AI | Inefficiencies with generating audio for training materials that does not include CUI, UCNI or Class material. Allows for quick generation and updates to course and video audio | Reduction of time to create and modify training courses to ensure qualification of employees | MP3 files incorporated into videos/courses | 16/09/2022 | Purchased from a vendor | Wellsaid | Yes | MP3 files incorporated into videos/courses | None, we do not train the model | No | None of the above | No | ||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-443 | Lex Natural Language Interface | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | This AI use case is strictly used for general purpose generative AI functionality. It is internal only, and no data is shared outside of NREL. | Generative AI | Find insights in a database with an LLM summarizing the results | Create an efficient and user-friendly system that enables users to query project data, such as funding, AUs (allocation units), project focus, and fiscal years, with natural language prompts. | The system executes a query against a PostgreSQL database and displays an LLM-generated textual summary of returned query records. The system also displays the LLM-generated queries. | 01/06/2025 | Developed in house | Microsoft | Yes | The system executes a query against a PostgreSQL database and displays an LLM-generated textual summary of returned query records. The system also displays the LLM-generated queries. | Structured data from a PostgreSQL database containing information about HPC (High-Performance Computing) projects. | N/A | No | Yes | ||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-444 | Copilot Studio | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | a) High-impact | High-impact | Doesn't meet criteria. | Generative AI | Enhancing day to day processes. | 1) Enhancing employee productivity and efficiency. | Customization Low-Code Development GPT-Based Capabilities Analytics Entities and Variables | 11/03/2024 | Purchased from a vendor | Microsoft | Yes | Customization Low-Code Development GPT-Based Capabilities Analytics Entities and Variables | No | Yes | No | N/A | In-Progress | None Identified | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | Yes | Not applicable | Other | ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-446 | Scopus AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Other | Not available | Assist researchers by reducing time to find applicable research while increasing quality and accuracy of identified hits. | Research citations, abstracts and other summaries. | Developed with both contracting and in-house resources | Not available | No | Research citations, abstracts and other summaries. | None. Scopus AI uses publicly available journal abstracts, no agency data is used. | Not available | No | No | Not available | ||||||||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-449 | EnerGPT | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition defined by OMB. | Generative AI | Improves efficiency of DOE staff. | EnerGPT aims to enhance user productivity and reduce time spent on redundant tasks. | EnerGPT generates answers to user questions. | 02/09/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | EnerGPT generates answers to user questions. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | EHSS HQ - Office of Environment Health Safety and Security (EHSS) | DOE-450 | MAPPRITE | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | MAPPRITE's AI does not provide outputs that serve as a principal basis for decisions or actions with legal, material, binding, or significant effects. | Generative AI | (1) The implemented AI will help automate data mining, ingesting, and indexing of existing disparate organizational data sources for relevant safeguards and security (S&S) support information; and (2) the expected benefits will be to help improve EHSS-51 business workflows for researching potentially relevant S&S support data. | (1) The implemented AI will help automate data mining, ingesting, and indexing of existing disparate organizational data sources for relevant safeguards and security (S&S) support information; and (2) the expected benefits will be to help improve EHSS-51 business workflows for researching potentially relevant S&S support data available such that the information will be accessible and searchable by policy subject matter specialists for awareness and additional context for strategic decision-making and policy management | In its full implementation phase, the application's AI output will provide S&S policy [support] data available such that the information will be accessible and searchable by policy subject matter specialists for strategic decision-making and policy management without having to manually search through hundreds of sources for relevant information. | 23/11/2025 | Developed with both contracting and in-house resources | Special Technologies Laboratory (STL) | Yes | In its full implementation phase, the application's AI output will provide S&S policy [support] data available such that the information will be accessible and searchable by policy subject matter specialists for strategic decision-making and policy management without having to manually search through hundreds of sources for relevant information. | Department of Energy Directives. Requirement source documents, such as statutes, regulations, and standards were also provided to the development team to assist with ingesting content to the AI model via AWS Kendra. | Yes | Yes | |||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-451 | ServiceNow AI Search | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet the criteria | Generative AI | Intelligent query features help ServiceNow users quickly find the answers they need. | 1) Enhancing employee productivity and efficiency. | AI Search includes search features that help users find the answers they need: query for indexed terms and phrases, control query logic with Boolean operators, and match a range of indexed terms using wildcard operators. AI Search provides users with clear answers for their search queries. | Yes | AI Search includes search features that help users find the answers they need: query for indexed terms and phrases, control query logic with Boolean operators, and match a range of indexed terms using wildcard operators. AI Search provides users with clear answers for their search queries. | No | No |||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-452 | Copilot for Microsoft 365 | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | a) High-impact | High-impact | Generative AI | Enhancing day-to-day processes. | 1) Enhancing employee productivity and efficiency. | Smart Documentation Creation, Efficient Meeting Management, Data Insights and Analysis, Security and Compliance | 11/03/2024 | Purchased from a vendor | Microsoft | Yes | Smart Documentation Creation, Efficient Meeting Management, Data Insights and Analysis, Security and Compliance | Microsoft 365 data | not applicable | Yes | No | not applicable | In-Progress | None Identified | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | Yes | Not applicable | Other | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-453 | MI8 Collimators Surrogate Model | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project is working to create an ML surrogate model of the existing MI8 collimation system. The purpose of the ML model is to aid in the tuning of the collimation system for accelerator operations and to help find more optimal settings in a timely manner. | This project is working to create an ML surrogate model of the existing MI8 collimation system. The purpose of the ML model is to aid in the tuning of the collimation system for accelerator operations and to help find more optimal settings in a timely manner. We hope to extend these techniques to other sub-systems and also a new MI8 collimation system being installed. | The ML outputs of the system are predictions of collimation system performance given collimation system settings and beam characteristics. | 25/09/2025 | Developed in house | No | The ML outputs of the system are predictions of collimation system performance given collimation system settings and beam characteristics. | Accelerator operations machine data | No | Yes | unknown | |||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-454 | AI for High Risk Property | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | 1) Regulatory compliance. 2) Improved identification of high-risk property items and increased productivity. | 1) Regulatory compliance. 2) Improved identification of high-risk property items and increased productivity. | Decision for high-risk property categorization | Yes | Decision for high-risk property categorization | No | Yes | ||||||||||||||||||
| Department Of Energy | SEPA - Southeastern Power Administration (PMA) | DOE-455 | Records Digitization | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | As part of the records digitization process, SEPA is leveraging AI and ML to enhance metadata tagging and quality. | Natural Language Processing | Enhance lookup of agency records and electronic documents | SEPA is leveraging AI and ML to enhance metadata tagging and quality control during the records digitization process. | Accurate metadata assignment in accordance with SEPA's NARA-approved file plan/records schedule. | Accurate metadata assignment in accordance with SEPA's NARA-approved file plan/records schedule. | ||||||||||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-456 | Machine Learning components within Splunk Enterprise Security | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High-Impact AI in Memorandum M-25-21 | Classical/Predictive Machine Learning | Clustering and classification of events | Improved automation of security threat hunting | Prediction of expected norms of log events | 01/09/2020 | Purchased from a vendor | Splunk | No | Prediction of expected norms of log events | Security Log Events | No | No | |||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-458 | Machine Learning components within CrowdStrike | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High-Impact AI in Memorandum M-25-21 | Classical/Predictive Machine Learning | CrowdStrike uses machine learning to review security events in order to create notifications of detections and incidents for the SLAC Cybersecurity team. | From the use of CrowdStrike's machine learning components, SLAC receives the benefit of visibility to analyze possible security events | CrowdStrike Detections/Incidents | 27/01/2021 | Purchased from a vendor | CrowdStrike | Yes | CrowdStrike Detections/Incidents | CrowdStrike's machine learning model is trained on data generated by activity at SLAC | No | None of the above | No | ||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-460 | AskOEDI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Tool is designed as an AI research assistant to help users find answers to questions about specific datasets beyond simple keyword searches. Disclaimers provided with the tool state that it should not be used for strategic decision making, nor actions. | Generative AI | AskOEDI serves as a virtual research assistant to OEDI users. It provides answers to a variety of user-provided questions using natural language processing and generative machine learning. Users can get answers to questions about specific datasets. | Making data more accessible and user-friendly for the public. | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | 01/10/2025 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | As this is summarizing data from the OEDI data repository, the related datasets are described by the catalog, which is also used to validate the AI responses: https://data.openei.org/ | https://data.openei.org | No | Yes | ||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-462 | Hanford Search | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | To be retired | The purpose of the Hanford Search is to provide similar functionality to our Hanford Search application without needing multiple applications. The benefits of the AI are that there is a single interface with multiple uses and the AI can provide better, more relevant search results. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to the Hanford Search Index | Yes | The system outputs text responses from user prompts requesting information on grounded data related to the Hanford Search Index | Our Data does not train the models. | No | Yes |||||||||||||||||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-463 | AI Chat Bot for IT User Services | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Interactive RAG chat bot model trained on existing, updated and new IT resource documentation in the user space. Platform will be used as an informative method for users to handle tier 1 IT issues and help guide users to the correct place. | Generative AI | Interactive RAG chat bot model trained on existing, updated and new IT resource documentation in the user space. Platform will be used as an informative method for users to handle tier 1 IT issues and help guide users to the correct place. | Interactive RAG chat bot model trained on existing, updated and new IT resource documentation in the user space. Platform will be used as an informative method for users to handle tier 1 IT issues and help guide users to the correct place. | AI output will be recommendations and instructions based on training data from IT administrators in more user-friendly responses | 25/09/2025 | Developed in house | No | AI output will be recommendations and instructions based on training data from IT administrators in more user-friendly responses | Help Desk Knowledge Base articles and other supporting documentation in the user space | No | No | No | In-Progress | ||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-464 | Hanford Popfon Search | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | See question 7 - use case retired. | The purpose of the Hanford Popfon Search is to provide similar functionality to our employee look-up application without needing multiple applications. The benefits of the AI are that there is a single interface with multiple uses and the previous application, which is older in architecture, can be retired, providing a safer, more secure, and cost-effective alternative. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to employee contact and organization information | Yes | The system outputs text responses from user prompts requesting information on grounded data related to employee contact and organization information | Our Data does not train the models. | No | Yes |||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-466 | AI for Intelligent Automation | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI. | Generative AI | workplace automation | Improve the timeliness and quality of manual work processes through automation where generative AI can make comparisons and decisions using INL procedures and controlled documents, with humans performing final validation and approval. | Completion of forms for human validation and approval. | Developed with both contracting and in-house resources | Not available | Yes | Completion of forms for human validation and approval. | At this time, INL plans to use non-CUI data with this solution, including the employee handbook, approved controlled documents, and other material that will assist workers in completing processes and activities. RAG (mini RAG preferred) is the method that will be used for integration with the AI solution. | Not available | Yes | Yes | Not available | ||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-467 | Cyber Threat Enrichment | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Not available. | Other | Cyber threat analysis | Enrich emerging cyber threat data with recent and past analysis data curated from the DOE-CESER Geo Threat Observable project and the grid modernization project Deep Learning Malware, using TBs of well-structured cyber threat data. Input vulnerability, weakness, and malware information to find connections to well-analyzed cyber threats, including attack patterns, known exploits, past mitigations, and detections. When firmware binaries are analyzed, they are translated to structured threat data used to create codified attack surfaces and Firmware or Software Bills of Materials (SBOM) for supply-chain tracking | Structured Threat Information Expression (STIX) data format providing actionable and implementable codified contextual data for use in cybersecurity products; if firmware binaries are analyzed and translated to STIX, the output is codified attack surfaces and an SBOM. | Developed in house | No | Structured Threat Information Expression (STIX) data format providing actionable and implementable codified contextual data for use in cybersecurity products; if firmware binaries are analyzed and translated to STIX, the output is codified attack surfaces and an SBOM. | Open source threat intelligence collected, NLP used to scrape information off of cyber incident reports and websites, some data from cyber sensors, threat feeds and some data from manual threat analysis activities. | Not available. | No | Yes | Not available. | |||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-469 | Hanford Ai Liaison (HAL) 1.1 | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effects, or any significant effects on those items listed in OMB M-25-21 in the definition of High Impact. | Generative AI | productivity efficiency | cost savings, increased efficiency, increased productivity, greater analytics of data | text answers to input questions | 28/10/2024 | Developed in house | Yes | text answers to input questions | Pre-trained from OpenAI | Not applicable | No | Not applicable | None of the above | Yes | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |
| Department Of Energy | GDO HQ - Grid Deployment Office (GDO) | DOE-472 | Argonne Resilience AI Assistant ARAIA (nee CALLM) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | The ARAIA (nee CALLM) project will have substantial impact in meeting current administration priorities for a secure and resilient power grid by ensuring that risks to the grid can be mitigated through planning, capital resource allocation | Generative AI | meeting current administration priorities for a secure and resilient power grid | The Argonne Resilience AI Assistant (ARAIA), nee CALLM (Climate Action through Large Language Models), addresses the problem of communicating complex climate projections and scientific literature to a broad audience, particularly electric sector stakeholders. ARAIA is on track to meet the needs and outcomes outlined in the initial project scope and work plan, and is now better positioned to meet future needs and administration goals. It simplifies this information to help these stakeholders identify climate resilience solutions. Expected benefits include improved communication of climate science, empowering stakeholders to directly address climate change impacts, and accelerating the scalability to serve a wider range of users. This ultimately leads to more effective resilience planning and potentially cost savings through better informed decision-making. Users can interact with the system to retrieve specific data, such as fire weather indices, and receive actionable recommendations on areas like hazard mitigation planning, infrastructure wildfire risk, and comprehensive wildfire impact. This project represents a significant step forward in integrating cutting-edge AI with our resilience planning efforts, ultimately helping communities and decision-makers mitigate the impacts of natural hazards. | The AI output of Argonne Resilience AI Assistant (ARAIA) is information synthesized from complex climate projections and scientific literature, presented in a simplified and accessible format. This output helps users understand potential climate impacts and identify appropriate climate resilience solutions. The information is grounded in vetted data and published research to minimize inaccuracies and hallucinations common in large language models. The output could range from summaries of climate-related risks, to lists of potential adaptation strategies tailored to specific situations, depending on the user's input and the function of the system it's integrated with (like ClimRR). | No | The AI output of Argonne Resilience AI Assistant (ARAIA) is information synthesized from complex climate projections and scientific literature, presented in a simplified and accessible format. This output helps users understand potential climate impacts and identify appropriate climate resilience solutions. The information is grounded in vetted data and published research to minimize inaccuracies and hallucinations common in large language models. The output could range from summaries of climate-related risks, to lists of potential adaptation strategies tailored to specific situations, depending on the user's input and the function of the system it's integrated with (like ClimRR). | Argonne Resilience AI Assistant (ARAIA) tool utilizes vetted climate data and published climate resilience literature to train, fine-tune, and evaluate its performance. The specific datasets are not detailed here, but the approach emphasizes the use of established climate science information to ground the model's responses and mitigate inaccuracies. | No |||||||||||||||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-474 | WCD-AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Natural Language Processing | Recommend keyword-based search for relevant Lessons Learned published on DOE OPEXShare. | Recommend keyword-based search for relevant Lessons Learned published on DOE OPEXShare. | Recommended keywords for search based on user-authored Work Control Document. | 01/01/2023 | Developed in house | Yes | Recommended keywords for search based on user-authored Work Control Document. | Existing Work Control Document records in database system. | No | No | Yes | Not applicable | ||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-476 | AI for Isotopes (Pellet) Inspection | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Computer Vision | Safety for employees | 1) Reduce technician radiation exposure. 2) Increased productivity. | Recommendation regarding pellet quality | Yes | Recommendation regarding pellet quality | No | Yes | ||||||||||||||||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-477 | OPQ-AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Classical/Predictive Machine Learning | Semi-automated person ID matching with existing database before new accounts are created. | Significantly reduces person-hours for manual review of incoming people registrations to match with existing database records by recommending most likely matches, if an existing record is identified that matches with the registration details. | Recommendation of matched person record that already exists or that a new person record should be created. | 01/03/2021 | Developed in house | Yes | Recommendation of matched person record that already exists or that a new person record should be created. | Data includes read-only access internal person/HR records existing in current database systems. | No | No | Yes | Not applicable | Yes | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Direct usability testing | |||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-479 | Funding Finder | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | It is retired. | The Funding Finder aggregates FOAs from different DOE sources and enables users to ask questions when identifying opportunities and developing proposals. | Answers to questions about DOE FOAs. | Answers to questions about DOE FOAs. | ||||||||||||||||||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-481 | PDF Analyzer | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not meet the requirements defined by OMB. | Generative AI | Summarization and knowledge retrieval. | PDF Analyzer will enable teams across the DOE to upload large PDFs and ask questions and generate content related to those PDFs. | PDF Analyzer will output the answers to a user's question along with the relevant sections of the PDF that the answer is based on. | 02/09/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | PDF Analyzer will output the answers to a user's question along with the relevant sections of the PDF that the answer is based on. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-482 | Hanford Service Ticket Lookup | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | See question 7 - use case retired. | The purpose of the Hanford Service Ticket Lookup is to provide a single interface for customers to ask questions and get to service tickets without having to navigate extensive menus, toolbars, and search functions. Eventually, this will include service tickets from multiple platforms, providing the customer with a single interface to do all things service-request related. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to Service Ticket Requests | Yes | The system outputs text responses from user prompts requesting information on grounded data related to Service Ticket Requests | Our Data does not train the models. | No | Yes |||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-483 | INL AI Virtual Assistant (AiVA) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used as a general/business chat agent for genAI. | Generative AI | work productivity | This chatbot uses commercial ChatGPT-like capability to answer questions, provide coaching on processes, summarize and improve communications, and produce code in a variety of formats. INL has been authorized and has planned activities in 2025 to begin adding internal INL non-CUI data using RAG. Examples include the employee handbook and approved controlled documents. | Outputs are consistent with commercial chatbot products, such as ChatGPT. | Developed in house | Yes | Outputs are consistent with commercial chatbot products, such as ChatGPT. | At this time, INL plans to use non-CUI data with this solution, including the employee handbook, approved controlled documents, and other material that will assist workers in completing processes and activities. RAG (mini RAG preferred) is the method that will be used for integration with the AI solution. | Not available | No | Yes | Not available | |||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-485 | Unleashing AI Transformer Models on FPGAs for Accelerating LHC and Particle Physics | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project centers on the deployment of Transformer models for Field Programmable Gate Arrays (FPGA), in order to seamlessly integrate AI capabilities into particle physics experiments, specifically focusing on the L1 triggering schemes and real-time magnet quench detection. | This effort focuses on Transformer models for representation learning on Field Programmable Gate Arrays (FPGA), in order to seamlessly integrate AI capabilities into particle physics experiments, specifically focusing on the CMS level-1 (L1) trigger at the High-Luminosity LHC (HL-LHC) and real-time magnet quench detection. While conventional methods for event identification have limitations, modern AI and machine learning techniques offer superior alternatives. | This AI system has twofold use cases: representation learning for the LHC trigger and multi-modal magnet quench detection algorithms. | 25/09/2025 | No | This AI system has twofold use cases: representation learning for the LHC trigger and multi-modal magnet quench detection algorithms. | research datasets from scientific experiments | No | Yes |||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-486 | Hanford Procedure Search | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | To be retired | The purpose of the Hanford Procedure Search is to provide greater service in our customers' search for relevant procedures, which is a main look-up for many of our employees. The benefits of the AI are that there is a single interface with multiple uses and the AI can provide better, more relevant search results. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to the Hanford Procedure System | Yes | The system outputs text responses from user prompts requesting information on grounded data related to the Hanford Procedure System | Our Data does not train the models. | No | Yes |||||||||||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-487 | LLM EV | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Not intended to produce outputs that are used as principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety | Generative AI | Large-scale policy analysis to understand permitting barriers to the deployment of electric vehicle charging infrastructure | Making NREL EV-specific data more accessible for researchers and accelerating research in this field. | Outputs analysis results from these studies: https://www.sciencedirect.com/science/article/pii/S2666546824000971 | 01/10/2024 | Developed in house | Azure, OpenAI | Yes | Outputs analysis results from these studies: https://www.sciencedirect.com/science/article/pii/S2666546824000971 | TBD | No | Yes | TBD | ||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-488 | AskPRIMRE | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Tool is designed as an AI research assistant to help users find answers to questions about specific datasets beyond simple keyword searches. Disclaimers provided with the tool state that it should not be used for strategic decision making. | Generative AI | The U.S. Department of Energy's Portal and Repository for Information on Marine Renewable Energy (PRIMRE) is an interconnected system of knowledge hubs that provide access to data, information, and other resources for the marine energy community. | Making data more accessible and user-friendly for the public. | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | 01/10/2024 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | As this is summarizing data from the PRIMRE data repository, the related datasets are described by the catalog, which is also used to validate the AI responses: https://openei.org/wiki/PRIMRE | https://mhkdr.openei.org/ | No | Yes | https://mhkdr.openei.org/ | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-490 | GitHub Copilot with the OpenAI Codex | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Agentic AI | Accelerate coding tasks with AI-assisted code suggestions and automation. | This increases their productivity, allows them to focus more on innovation and research, and accelerates the development of high-quality software solutions for scientific research. | This increases their productivity, allows them to focus more on innovation and research, and accelerates the development of high-quality software solutions for scientific research. | 01/10/2024 | Developed with both contracting and in-house resources | OpenAI | Yes | This increases their productivity, allows them to focus more on innovation and research, and accelerates the development of high-quality software solutions for scientific research. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-491 | First Alert DataMinr AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of OMB Memorandum M-25-21. | Natural Language Processing | Combine the text of publicly available news sources about events that share the same time and geographic location | We use this AI service to provide spatial and situational awareness of incidents occurring around NNSA or DOE sites. It supports emergency reporting and monitoring and speeds up emergency operations reporting. The AI is being used in place of a large team that would be required for coding and development of emergency services facilitation. This is cost-saving software for the Federal government. | Data aggregation and reflection. | 03/01/2025 | Purchased from a vendor | Dataminr | Yes | Data aggregation and reflection. | This information is proprietary to DataMinr. No information or contribution comes from NNSA. | Dataminr - FirstAlert is not publicly available and is not required to be. | No | None of the above | No | N/A | ||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-492 | Yurts AI search function | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Generative AI | Access to data in various silos | Easier access to appropriate data | Chat and search with ability to modify responses to fit the user's needs (i.e., tone and formality) | 13/03/2024 | Developed with both contracting and in-house resources | Legion (Previously Yurts) | No | Chat and search with ability to modify responses to fit the user's needs (i.e., tone and formality) | SLAC Internal Documentation and user prompts | No | No | |||||||||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-493 | Interactive platform to help review and create "Promoting Inclusive and Equitable Research" Plans | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Retired | A trained and informed model that helps review PIER plans for accuracy and consistency and integrates specific PPPL DEIA goals and initiatives, aligned with user input, to clearly define the goals of the research plan. | AI output will be revision suggestions to a user-submitted PIER plan in order to align with PPPL-specific goals and initiatives. It will also help guide the user to create a plan more consistent with previously submitted/approved plans. | No | AI output will be revision suggestions to a user-submitted PIER plan in order to align with PPPL-specific goals and initiatives. It will also help guide the user to create a plan more consistent with previously submitted/approved plans. | PPPL-specific PIER plan guidelines, previously submitted and approved PIER plans, public DOE guidance, and other leadership data to help refine plans that align with laboratory strategic goals. | No | No | |||||||||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-494 | Energy Wizard | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | The tool is currently only available internally and serves as a research tool that enables the discovery and evaluation of NREL published research. | Generative AI | The tool aims to explore and extract meaningful insights from NREL's vast database of publications, including but not limited to technical reports, presentations, and conference papers. There are over 56,000 publications in the NREL research hub. | Making NREL data more accessible for researchers and accelerating research. | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the selected publications and research profiles. | 01/08/2024 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the selected publications and research profiles. | As this is summarizing data from the OEDI data repository, the related datasets are described by the catalog, which is also used to validate the AI responses: https://data.openei.org/ | https://data.openei.org | No | Yes | https://github.com/NREL/elm | |||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-495 | LANL AI Portal | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Generative AI | Democratized access to open-source/open-weights Large Language Models (LLMs) for general-purpose office productivity, research of AI models, software development, operational streamlining, and code development | Democratized access to open-source/open-weights Large Language Models (LLMs) for general-purpose office productivity, research of AI models, software development, operational streamlining, and code development | Interactive text chat replies from user prompts, summaries of documents the user submitted for Retrieval Augmented Generation (RAG), and replies to API queries from enterprise and scientific applications | 06/01/2025 | Developed in house | Amazon Web Services (hosting provider) | Yes | Interactive text chat replies from user prompts, summaries of documents the user submitted for Retrieval Augmented Generation (RAG), and replies to API queries from enterprise and scientific applications | Not trained in-house, using open-source/open-weights models. Reliant on model provider transparency | No | None of the above | Yes | https://github.com/vllm-project/vllm https://github.com/awslabs/LISA https://github.com/BerriAI/litellm | Yes | Positive impact on laboratory cost/time efficiency by reducing compliance burden on internal teams | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-497 | SmartPD Creator | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition established by OMB. | Generative AI | It improves the speed and accuracy with which DOE employees create position descriptions. | The time to hire much-needed resources will be reduced and the process will be greatly improved. | Position descriptions for federal roles. | 02/09/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | Position descriptions for federal roles. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-498 | ChatGPT Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Generative AI | General business productivity, research of AI models | General business productivity, research of AI models | Interactive text chat replies from user prompts, summaries of open/public/unrestricted documents the user submitted for Retrieval Augmented Generation (RAG) through CustomGPTs | 27/05/2024 | Purchased from a vendor | OpenAI (SAAS hosting provider) | No | Interactive text chat replies from user prompts, summaries of open/public/unrestricted documents the user submitted for Retrieval Augmented Generation (RAG) through CustomGPTs | Not trained on any agency or LANL data | No | None of the above | No | Yes | Positive impact on laboratory cost/time efficiency by making a market-leading research tool available to all LANL employees | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-499 | Argo | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Generative AI | Broad usage across science and lab-ops for use cases that could benefit from genAI techniques. | Enables anyone within the Argonne community to leverage text-based generative AI with their Argonne-specific information and data, including sensitive research or operational data up to and including CUI. | Large language model responses (prediction-based) following user prompting. | 01/11/2023 | Developed in house | Yes | Large language model responses (prediction-based) following user prompting. | N/A – we are using existing pre-trained large language models, no training required. | No | No | Yes | Not applicable | Yes | Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | Yes | Not applicable | Direct usability testing | ||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-500 | AI Chat Bot for Facility Sustainability Practices | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Retired | Interactive RAG chat bot model trained on facility recycling, composting, and trash disposal guidelines to inform users how to handle niche cases for sustainably disposing of unwanted items. This should reduce confusion when throwing items out and also increase the amount of properly recycled items at PPPL. | AI output will be recommendations and instructions based on training data from facility data on recycling, trash, and composting | No | AI output will be recommendations and instructions based on training data from facility data on recycling, trash, and composting | Facility documentation on proper recycling, trash, and composting guidelines. Location data for areas where specific items can be thrown away. Dynamic training on publicly maintained websites to interpret updated guidelines within PPPL and in the complex, as well as data on upcoming events. | No | No | |||||||||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-501 | AskGDR | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Tool is designed as an AI research assistant to help users find answers to questions about specific datasets beyond simple keyword searches. | Generative AI | AskGDR serves as a virtual research assistant to GDR users. It provides answers to a variety of user-provided questions using natural language processing and generative machine learning. Users can get answers to questions about specific datasets. | Making data more accessible and user-friendly for the public. | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | 01/10/2024 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content, which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | As this is summarizing data from the GDR data repository, the related datasets are described by the catalog, which is also used to validate the AI responses: https://gdr.openei.org/ | https://gdr.openei.org/ | No | Yes | ||||||||||||
| Department Of Energy | PA HQ - Office of Public Affairs (PA) | DOE-502 | Topic Modeling for Energy.gov | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Because it doesn't impact an individual or entity's civil rights, civil liberties, or privacy; or an individual or entity's access to education, housing, insurance, credit, employment, and other programs; an individual or entity's access to critical | Minimize the manual effort of reading and tagging 100,000 Energy.gov webpages. | A list of five tags that best categorize an Energy.gov webpage. | Yes | A list of five tags that best categorize an Energy.gov webpage. | Web content from Energy.gov is being used for this use case. | |||||||||||||||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-503 | Boston Dynamics Spot Robotics | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Obstacle avoidance without human intervention | Automation of the robot's movements | Development and testing of robotic use for security and emergency responses, helping to decide on the "best" path for the robot to move | 05/07/2025 | Purchased from a vendor | Boston Dynamics | Yes | Development and testing of robotic use for security and emergency responses, helping to decide on the "best" path for the robot to move | No | None of the above | Yes | ||||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-504 | DNA-P Use Cases Leveraging Artificial Intelligence (Pilot) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of OMB Memorandum M-25-21. | Other | "- identify data clusters/trend analysis - identify data discrepancies/data enrichment - generate suggestions (including generating reports, data linkages, and courses of action) - generate graphical and natural language analyses" | "- save time for DOE DNA-P users - improve DOE/NNSA data quality - improve DOE/NNSA safety operations" | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities - responses to queries via RAG workflows" | 01/04/2024 | Developed in house | Palantir | Yes | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities - responses to queries via RAG workflows" | "- No custom models developed - AI use cases have been deployed on publicly available information as well as agency provided data" | No | PIA not publicly available | None of the above | No | PIA not publicly available | ||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-506 | Advanced Peer to Peer Transactive Energy Platform with Predictive Optimization | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Applying AI/ML to optimize renewable energy generation and consumption on a Smart Grid with blockchain technologies | Applying AI/ML to optimize renewable energy generation and consumption on a Smart Grid with blockchain technologies | Applying AI/ML to optimize renewable energy generation and consumption on a Smart Grid with blockchain technologies | Applying AI/ML to optimize renewable energy generation and consumption on a Smart Grid with blockchain technologies | ||||||||||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-507 | ServiceNow Virtual Agent Natural Language Understanding | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | System is used as part of IT Service Management and does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Natural Language Processing | Aid in users receiving better IT support for incident reporting and service delivery. | Quicker and easier access for users to access pre-built IT Service Management incident and request templates. | Pre-built IT Service Management incident and request templates. The Virtual Agent NLU is only used to understand the user's intent and entity, where it then performs a search of the service catalog to return the most relevant result. | 30/06/2025 | Purchased from a vendor | ServiceNow | Yes | Pre-built IT Service Management incident and request templates. The Virtual Agent NLU is only used to understand the user's intent and entity, where it then performs a search of the service catalog to return the most relevant result. | The NLU is provided common phrases to recognize related to opening a ticket, closing a ticket, checking the status of a ticket, updating a ticket, searching a knowledge article, and connecting with a live agent. | No | No | |||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-508 | ServiceNow Predictive Intelligence | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Classical/Predictive Machine Learning | Reduce error rate of categorization of incidents in ServiceNow | Reduction of errors in the categorization of incidents | Predictions of categorization of incidents | Predictions of categorization of incidents | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-510 | Microsoft Bing Service | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-511 | Articulate 360 AI Assistant | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | Purchased from a vendor | Articulate | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Applicable | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | |||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-512 | Microsoft Azure Quantum Elements | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Accelerate chemistry and materials discovery using AI, HPC, and quantum-ready tools. | Speeds discovery of new materials and chemicals for energy solutions. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft Corporation | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-513 | Nanoparticle growth kinetics and mechanism | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | AI is used to digitize nanoparticles from TEM images. Morphology and crystalline features will be made available through AI. Combined with kinetics modeling, AI detects critical material transformation events. The CFN TEM facility will be enhanced by AI pipelines. | AI is used to digitize nanoparticles from TEM images. Morphology and crystalline features will be made available through AI. Combined with kinetics modeling, AI detects critical material transformation events. The CFN TEM facility will be enhanced by AI pipelines. | AI is used to digitize nanoparticles from TEM images. Morphology and crystalline features will be made available through AI. Combined with kinetics modeling, AI detects critical material transformation events. The CFN TEM facility will be enhanced by AI pipelines. | AI is used to digitize nanoparticles from TEM images. Morphology and crystalline features will be made available through AI. Combined with kinetics modeling, AI detects critical material transformation events. The CFN TEM facility will be enhanced by AI pipelines. | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-514 | Bernie-AI: Infrastructure Planning Support POC | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Generative AI | Quick access to specific building and major asset information to be used for planning and predictive maintenance purposes | Ability to quickly find and access building and asset information to support planning activities | Specific, data-driven answers to building and major asset use and impacts | 30/06/2025 | Developed in house | No | Specific, data-driven answers to building and major asset use and impacts | Building, asset, maintenance data | No | No | None of the above | Yes | N/A | In-Progress | Not applicable | Not applicable | Other | |||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-516 | Center for Mesoscale Transport Properties | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Center for Mesoscale Transport Properties | Center for Mesoscale Transport Properties | Center for Mesoscale Transport Properties | Center for Mesoscale Transport Properties | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-517 | Develop a Machine Learning Framework for Optimal Computational Campaigns for Complex Uncertain Systems | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | ||||||||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-518 | Microsoft Copilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effect, or any significant effect on those items listed in OMB M-25-21 in the definition of high impact. | Generative AI | Better analytics of data in our Microsoft applications | Better analytics of data in our Microsoft applications | Text answers to input questions | 01/09/2025 | Purchased from a vendor | Microsoft | Yes | Text answers to input questions | Pre-trained by OpenAI, with access to employees' Microsoft data sources | Not applicable | No | Not applicable | None of the above | No | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | Direct usability testing
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-519 | Visual Studio Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-520 | NLP Data Analytics for Program and Portfolio Insights | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Produces analytic insights only; does not affect rights, benefits, or binding decisions. | Natural Language Processing | Improve analytical capabilities and self-service access to organizational data. Increase process efficiency and analytical insights. | Faster and more consistent analysis, earlier risk detection, enhanced decision-making. | Dashboards, SQL query results, structured datasets, extracted keywords, sentiment, thematic trends. | Dashboards, SQL query results, structured datasets, extracted keywords, sentiment, thematic trends. | ||||||||||||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-521 | Command Media | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | The AI output is not a basis for decisions or actions. It is for information retrieval. | Generative AI | The AI is intended to solve the inefficiency of plant personnel who have questions about policies, procedures, and work instructions by providing a more direct way to find information. | The expected benefit is increased efficiency for plant personnel. | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. | Developed in house | No | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. ||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-522 | Ask CAS | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG to support issues management | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-523 | AI/ML in High Energy Physics Research | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-524 | ESH&Q NEPA App | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Other | Not available | Speeds up environmental review processes and improves compliance accuracy, reducing delays in project approvals. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-525 | Microsoft OneNote | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-526 | OpenAI ChatGPT Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and capabilities | Generative AI | Quick access to general knowledge. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | OpenAI | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-527 | ACORN (Autonomous Operation for Reactor Technologies) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Currently focusing on a reactor simulator or auxiliary moderator displacement rod, with minimal impacts on reactor safety | Classical/Predictive Machine Learning | AI automatically identifies process models from simulation and operational data, solves for optimal control actions that can achieve user-defined objectives, executes actions, and observes system responses | Reduce labor and costs to perform operation tasks in advanced reactors and microreactors | optimal control actions | 01/09/2023 | Developed with both contracting and in-house resources | Open source development | Yes | optimal control actions | Time series data from sensors or simulation results | Not available | No | Yes | Not available |||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-529 | NIF Shot Analytics & Predictive Maintenance Support Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Classical/Predictive Machine Learning | Quick access to specific NIF laser data and problem resolution information | Ability to quickly find and access targeted NIF maintenance knowledge | Specific, data-driven answers to NIF shot maintenance questions | 30/06/2025 | Purchased from a vendor | C3 | No | Specific, data-driven answers to NIF shot maintenance questions | NIF shot data and support ticket information | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-530 | Merlin - KCNSC Generative AI with RAG | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Output does not serve as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. Assists in developing software code that may accept product, but this AI will not make those decisions | Generative AI | Intended to address the opportunity to enhance overall productivity. No definite 'problem' being solved, just capitalizing on an opportunity to leverage industry investment in generative AI broadly. | It serves as a general productivity enhancer, not driven by a specific problem but by the opportunity to improve efficiency. | Outputs are context-aware responses that combine generated content with retrieved, authoritative information to ensure accuracy, relevance, and grounding. | 16/07/2025 | Developed in house | N/A - Open Source Integration | Yes | Outputs are context-aware responses that combine generated content with retrieved, authoritative information to ensure accuracy, relevance, and grounding. | Leveraging OpenAI open-sourced models, so they are responsible for providing training data | No | None of the above | No | https://huggingface.co/openai/gpt-oss-120b |||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-531 | IDAES-PSE | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Offers extensive process systems engineering (PSE) capabilities for optimizing the design and operation of complex, interacting technologies and systems. | Optimize the design and operation of complex, interacting technologies and systems. | Models | Models | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-532 | NRAP-Open-IAM | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Enables quantification of containment effectiveness and leakage risk at carbon storage sites in the context of system uncertainties and variability. | Enables quantification of effectiveness and risk. | Data | Data | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-533 | SMMM | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-534 | Custom GenAI for eVinci Microreactor Engineering (MauroGPT) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Generative AI | Reactor engineers spending extensive time manually iterating through documents during the reactor engineering process | Reactor engineers save time by using the AI to quickly find answers and relevant source documents. | Answers to reactor design and engineering questions with citations back to source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development with INL Advanced Analytics Center of Excellence | Yes | Answers to reactor design and engineering questions with citations back to source documents. | Advanced reactor engineering schematics | Not available | No | No | Not available |||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-535 | AI Builder Document Scraping | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | AI Builder scrapes PDF files for text. The end user is responsible for verifying the quality of the final text. This does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Computer Vision | Enhance and automate PDF scraping for large sets of PDF files. | Improved Microsoft PDF scraping model that allows the user to provide a training set for their PDFs. | Text and file output in Microsoft applications. | 03/02/2025 | Purchased from a vendor | Microsoft | Yes | Text and file output in Microsoft applications. | Microsoft AI Builder has a built-in PDF scraping model and learns from a set of PDF files where data is located within the PDF layout. | No | No ||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-536 | Microsoft Azure Authoring Tools | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-537 | WETO SA | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Not intended to produce outputs that are used as principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety | Generative AI | Assess the developable quantity and quality of the renewable resources. | It seeks to better understand the uncertainty and impact of siting considerations and to understand when and where wind technological innovation may help to overcome potential land barriers. | Robust surrogate models to deliver meaningful insights across a broad set of technology innovations and site characteristics while overcoming computational challenges | 01/10/2024 | Developed in house | Azure, OpenAI | Yes | Robust surrogate models to deliver meaningful insights across a broad set of technology innovations and site characteristics while overcoming computational challenges | No | No | ||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-538 | AI Assistant Phase 2 Simple Chat | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Doesn't meet the criteria. | Generative AI | Make employees at ORNL more productive | Enhanced employee productivity | Natural language responses based on a wide variety of file and text based input. | 04/08/2025 | Developed in house | Yes | Natural language responses based on a wide variety of file and text based input. | OpenAI training data | No | Yes | None of the above | Yes | Yes |||||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-539 | Drone Imagery Analysis | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | AI allows the drone to improve human safety in imagery collection and analysis. | Simple and efficient asset inspection with improved safety factor. | Automatically stitch together images for visualization by humans. | 01/10/2022 | Purchased from a vendor | TBD | No | Automatically stitch together images for visualization by humans. | Human review will be used to validate. | No | None of the above | No | In-Progress | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-540 | OpenAI Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, or binding effects | Generative AI | Provide enterprise-grade AI assistance with secure access to OpenAI GPT models. | Provides secure, reliable access to advanced AI capabilities for research. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | OpenAI | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-541 | VIPER (Visualization for Predictive Maintenance Recommendation) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | VIPER presents comprehensive system health diagnostics, explainability metrics, and actionable recommendations to system engineers at nuclear power plants, enabling informed decision-making through an easy-to-use visualization interface. A multi-mode | Reduce labor and costs to perform maintenance tasks in existing light water reactors | System diagnostic and prognostic results, system description and root cause explanations | 01/09/2024 | Developed with both contracting and in-house resources | Open Source development | Yes | System diagnostic and prognostic results, system description and root cause explanations | Sensor data from Salem and Hope Creek nuclear power plants, operated by PSEG; NRC, EPRI, and INL public reports | Not available | No | Yes | Not available |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-543 | Decisions AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Enhance meeting management by integrating AI insights with Decisions platform. | Enhances meeting effectiveness and decision-making outcomes. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | OpenAI | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No |||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-544 | Poseidon | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Other | Content analysis across domains and structured/unstructured content for SCRM | Productivity Tool | Text | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-545 | Soil Moisture Modeling | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, or binding effects | Classical/Predictive Machine Learning | Machine learning solves issues with soil moisture by enhancing accuracy and efficiency in data analysis | The ability to determine evapotranspiration rates on disposal cell cover using publicly available data from satellites. | Multi-layer soil moisture model/prediction | 01/02/2022 | Purchased from a vendor | University of Montana | No | Multi-layer soil moisture model/prediction | Data is held back from the models to validate model outputs. | No | None of the above | Yes | Yes | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing ||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-546 | Azure Document Intelligence | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Azure Document Intelligence scrapes PDF files for text. The end user is responsible for verifying the quality of the final text/product. This does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Computer Vision | Enhance and automate PDF scraping for large sets of PDF files. | Improved Microsoft PDF scraping model that allows the user to provide a training set for their PDFs. | Text and file output in Microsoft applications or Azure Synapse Datalake | Text and file output in Microsoft applications or Azure Synapse Datalake | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-547 | AI-Enhanced Hub | Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Enhance staff matching and profiles with AI-driven HUB search capabilities. | Enhances collaboration and expertise matching across the lab. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No ||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-548 | OpenText for Records Management (Email Auto-Classification) | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not fall within the requirements for high impact. | ||||||||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-549 | Apple Intelligence | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 28/10/2024 | Purchased from a vendor | Apple | No | Proprietary/unknown data set used for model training. | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | |||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-550 | CURIE - Conversational Unified Research Information Engine | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Not used for organizational outcomes or work products | Generative AI | Turning unstructured questions, tasks, or ideas into structured outcomes | increasing productivity, enhance decision-making | Textual outputs | Textual outputs | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-551 | Microsoft ScreenSketch | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | Yes | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-552 | Microsoft Visual C++ Additional Runtime | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-553 | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-554 | Machine learning for accelerated understanding of dynamic catalysis | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure determines the activity of reactions and how it is transiently transformed under varying operating conditions. The proposed effort seeks to take on this challenge with a data-science-driven approach to computational modeling, joining it with advanced experimental methods of characterization to create new methods for capturing realistic complexity of reactions at heterogeneous and disordered interfaces. As a prototype application, we will focus on the water gas shift reaction (WGSR) CO + H2O → CO2 + H2, as carried out over an active oxide (ceria CeO2) supported nanoscale Pt cluster catalyst. The Pt/CeO2 system is a high activity, low temperature catalyst for WGSR in which transient catalyst reconstructions and fluxional oscillating behavior of active sites at the metal-support interface play an essential role. | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure determines the activity of reactions and how it is transiently transformed under varying operating conditions. The proposed effort seeks to take on this challenge with a data-science-driven approach to computational modeling, joining it with advanced experimental methods of characterization to create new methods for capturing realistic complexity of reactions at heterogeneous and disordered interfaces. As a prototype application, we will focus on the water gas shift reaction (WGSR) CO + H2O → CO2 + H2, as carried out over an active oxide (ceria CeO2) supported nanoscale Pt cluster catalyst. The Pt/CeO2 system is a high activity, low temperature catalyst for WGSR in which transient catalyst reconstructions and fluxional oscillating behavior | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure determines the activity of reactions and how it is transiently transformed under varying operating conditions. The proposed effort seeks to take on this challenge with a data-science-driven approach to computational modeling, joining it with advanced experimental methods of characterization to create new methods for capturing realistic complexity of reactions at heterogeneous and disordered interfaces. As a prototype application, we will focus on the water gas shift reaction (WGSR) CO + H2O → CO2 + H2, as carried out over an active oxide (ceria CeO2) supported nanoscale Pt cluster catalyst. The Pt/CeO2 system is a high activity, low temperature catalyst for WGSR in which transient catalyst reconstructions and fluxional oscillating behavior | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-555 | Computer Vision for Defect Detection | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | The AI use case focuses on product quality control and does not significantly affect legal, material, or critical access to services rights. | Computer Vision | To automate and enhance the detection of visible defects in products, improving quality control and reducing production errors. | Improved product quality, reduced production costs, less need for manual inspections, and enhanced productivity. The AI system will lead to significant cost savings and better consistency in products over time. | The AI system outputs will include identified defects in product images, which will then be reviewed and verified by human operators. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The AI system outputs will include identified defects in product images, which will then be reviewed and verified by human operators. |||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-556 | Microsoft Project | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-558 | HeyGen | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Explore the generation of engaging internal training videos using AI avatars. | Makes internal training more engaging and effective. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Purchased from a vendor | HeyGen | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-559 | Nuclear Safety Analysis | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Other | Safety, risk, and reliability analysis is performed by reactor designers, developers, and plant operators to ensure safe design and operation of the reactor and the plant; and is required by the regulators as part of license application. Performing s | The outcome of this effort will provide a tool to the safety analysis teams which will enable them to automate creating the risk models resulting in significant reduction in time spent on performing safety analysis, writing SARs, and conducting the regulatory review of safety case. | Failure Modes and Effects Analysis Fault tree analysis Creating risk assessment models | Developed with both contracting and in-house resources | Not available | No | Failure Modes and Effects Analysis Fault tree analysis Creating risk assessment models | Not available | No | Yes | Not available | ||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-56 | CrewAI | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | No high-impact category applies | Agentic AI | Automate and orchestrate workflows across LLMs and cloud platforms. | Boosts efficiency by automating complex workflows across platforms. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/01/2025 | Purchased from a vendor | CrewAI | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-560 | AI for QA Audit | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Improve quality assurance at different levels of the Lab. | Improved quality assurance | Recommendations on existing QA audit submissions | Recommendations on existing QA audit submissions ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-561 | Google Chrome Generative AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet the criteria for High-Impact. Unless explicitly deployed in a safety-critical or classified environment, it should be considered Not High Impact. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 06/12/2023 | Purchased from a vendor | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | |||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-562 | PWS Builder | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | Improved speed to create a performance work statement document. | DOE employees would be able to quickly and accurately draft performance work statements for projects both net-new and in-flight. This would greatly reduce the time needed to get a project up and running. | Performance work statements. | 01/10/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | Performance work statements. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-563 | ServiceNow Classification Prediction | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition outlined by OMB. | Classical/Predictive Machine Learning | Inconsistencies in classification values determined by human technicians | Improved automation | Prediction | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Prediction | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-564 | Safeguards Digital Twin | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Classical/Predictive Machine Learning | This deals with international safeguards approaches. | Reduce the burden on inspectors by synthesizing data and flagging anomalies. | Flags when off-normal operations are detected, along with expected material generated. | Developed with both contracting and in-house resources | Not available | No | Flags when off-normal operations are detected, along with expected material generated. | Reactor physics data from Serpent | Not available | No | Yes | Not available ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-565 | Use AI/ML to Optimize Data and Experiments at National Synchrotron Light Source II (NSLS-II) and the Accelerator Test Facility (ATF) | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | This deals with international safeguards approaches. | Classical/Predictive Machine Learning | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSL | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSLS-II and ATF | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSLS-II and ATF | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSLS-II and ATF | ||||||||||||||||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-567 | Tabnine AI Pair Programmer | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | AI provides structured recommendations only; human reviewers retain authority for evaluation decisions | Generative AI | Accelerate and simplify software development across our entire site. There is no definite 'problem' being solved, just capitalizing on an opportunity to leverage industry investment in generative AI for software development | This use case will boost engineering velocity, code quality, and developer happiness by automating the coding workflow through AI tools customized to our teams. | Expedited quality software for Test Engineering | 16/07/2025 | Purchased from a vendor | Tabnine | Yes | Expedited quality software for Test Engineering | Tabnine is responsible for model training, containerization, and updates to their software | No | None of the above | No ||||||||||||
| Department Of Energy | IM-60 - IM Enterprise Operations and Shared Services (IM) | DOE-568 | AI-Based Chat Bot | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Output does not serve as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. Assists in developing software code that may accept product, but this AI will not make those decision | ||||||||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-569 | Ask Alan | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI/AI interactive learning | Productivity Tool | Text | 01/09/2025 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-57 | Elastic Stack Technology (ELK) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | No high-impact category applies | Classical/Predictive Machine Learning | Increase searchability of documents for information pertinent to the mission | Enable intelligent media cataloging to assist with data discovery | Intelligent collection content searching using ElasticSearch, Logstash, and Kibana | 01/11/2022 | Purchased from a vendor | Elastic | Yes | Intelligent collection content searching using ElasticSearch, Logstash, and Kibana | Knowledge Preservation Management (KPM) Media | No | None of the above | Yes | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-570 | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | ||||||||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-571 | GitHub Co-Pilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effects, nor any of the significant effects on the items listed in the definition of high-impact AI in OMB M-25-21. | Generative AI | quicker, more complete, and safer code development | quicker, more complete, and safer code development | text answers to input questions, code suggestions | 25/08/2025 | Purchased from a vendor | GitHub | Yes | text answers to input questions, code suggestions | Pretrained AIs with access to GitHub code repositories | Not applicable | No | Not applicable | None of the above | No | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | Direct usability testing
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-572 | Use ML as part of an integrated strategy for forecasting renewable energy resources | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | ||||||||||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-573 | DNA-P Use Cases Leveraging Artificial Intelligence (Pre-Development) | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of OMB Memorandum M-25-21. | Other | "- identify data clusters/trend analysis - identify data discrepancies/data enrichment - generate suggestions (including generating reports, data linkages, and courses of action) - generate graphical and natural language analyses" | "- save time for DOE DNA-P users - improve DOE/NNSA data quality - improve DOE/NNSA safety operations" | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities - responses to queries via RAG workflows" | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities - responses to queries via RAG workflows" ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-574 | M365 Copilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | b) Presumed high-impact, but determined not high-impact | Not high-impact | M365 Copilot and its variants are integrated into all LANL collaboration and productivity services within M365, including email and Teams. | Generative AI | Collaboration and Productivity improvements | Automation of routine tasks such as email drafting, meeting summaries, and document generation. Improved efficiency in handling mundane tasks. | Contextual responses, action results, agentic orchestration for Copilot Studio, applied templates, and a host of outputs depending on the M365 App it is being used with. | 01/07/2025 | Purchased from a vendor | Microsoft | Yes | Contextual responses, action results, agentic orchestration for Copilot Studio, applied templates, and a host of outputs depending on the M365 App it is being used with. | Pre-trained on public and licensed data but NOT retrained on GCC content. The LLM's training data cutoff was October 2023. | No | None of the above | No | Yes | M365 Copilot and its variants are collaboration, productivity, and coding tools that will provide efficiency and speed to the delivery of common work tasks. | In-Progress | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Not applicable | Other ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-575 | EDMS Admin | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Used for records management functions. | Generative AI | Machine-readable information is difficult for humans to summarize quickly. By enabling summaries of information in our electronic document management system, we gain a previously unavailable capability that addresses this problem. | Reduces administrative burden through intelligent automation of routine tasks. | Information summaries | 02/09/2025 | Developed with both contracting and in-house resources | Not available | No | Information summaries | Not available | No | Yes | Not available ||||||||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-576 | EnerGPT Canvas | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition set by OMB. | Generative AI | Improves the speed and accuracy with which DOE employees draft, edit, and produce content. | Allows DOE users to edit their projects using AI in one single platform. | New content, edits to existing content, code, etc. | 06/08/2025 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | New content, edits to existing content, code, etc. | Google's Gemini family of models. | No | None of the above | No ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-577 | Microsoft Copilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-578 | Project Optimus - Prime Contract | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used as a general/business chat agent for genAI. | Other | Not available | Improves contract lifecycle efficiency and compliance tracking. | Contract citations, abstracts and other summaries. | Developed with both contracting and in-house resources | Not available | No | Contract citations, abstracts and other summaries. | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-579 | MR-DT | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | This deals with international safeguards approaches. | Classical/Predictive Machine Learning | Aid safeguards analysts in determining whether a reactor is being used in a non-declared way. | Reduce the burden on inspectors by synthesizing data and flagging anomalies. | Flags when off-normal operations are detected, along with expected material generated. | Flags when off-normal operations are detected, along with expected material generated. ||||||||||||||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-58 | Raytheon Multimedia Monitoring System (M3S) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | No high-impact category applies | Natural Language Processing | Video transcript generation | Enable off-cloud transcription of pre-recorded video media to improve data discoverability | XML representation of the speech detected in a pre-recorded video | 18/01/2024 | Purchased from a vendor | Raytheon BBN Technologies | Yes | XML representation of the speech detected in a pre-recorded video | Knowledge Preservation Management (KPM) Media | No | None of the above | Yes | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-582 | Development of a Planning, Operation, and Control Framework for Hybrid Energy Storage and Renewable Generation Systems | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resourc | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resources by leveraging BNL's expertise in energy storage technologies, probabilistic-based planning and control solutions, and machine learning techniques. | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resources by leveraging BNL's expertise in energy storage technologies, probabilistic-based planning and control solutions, and machine learning techniques. | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resources by leveraging BNL's expertise in energy storage technologies, probabilistic-based planning and control solutions, and machine learning techniques. | ||||||||||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-584 | GitHub Copilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Generative AI | Enhancement of developer productivity | Cost savings and efficiency | Recommendations for code | 01/07/2025 | Purchased from a vendor | Microsoft | Yes | Recommendations for code | No SLAC Data is used to train the model. Only provided prompts for output | No | None of the above | No | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-585 | Accelerated Nanomaterial Discovery | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Historically the discovery and development of new materials has followed an iterative process of synthesis, measurement, and modeling; suitable integration of advanced characterization, robotics, and machine-learning provides an opportunity for radic | Historically the discovery and development of new materials has followed an iterative process of synthesis, measurement, and modeling; suitable integration of advanced characterization, robotics, and machine-learning provides an opportunity for radically accelerating the material design process. The CFN has an established record of discovering nanomaterials by applying new materials synthesis strategies, advanced characterization, and machine-learning. Integrating these efforts will enable autonomous platforms for iteratively exploring material parameter spaces, which have the potential to revolutionize materials science by uncovering fundamental links between synthetic pathways, material structure, and functional properties. | Historically the discovery and development of new materials has followed an iterative process of synthesis, measurement, and modeling; suitable integration of advanced characterization, robotics, and machine-learning provides an opportunity for radically accelerating the material design process. The CFN has an established record of discovering nanomaterials by applying new materials synthesis strategies, advanced characterization, and machine-learning. Integrating these efforts will enable autonomous platforms for iteratively exploring material parameter spaces, which have the potential to revolutionize materials science by uncovering fundamental links between synthetic pathways, material structure, and functional properties. | ||||||||||||||||||||
| Department Of Energy | WAPA - Western Area Power Administration (PMA) | DOE-586 | GitHub Copilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI output does not serve as a principal basis for decisions or actions with legal, material, binding, or significant effect on high-impact areas. | Natural Language Processing | Improve the quality and speed of code development. | Speed the delivery of code development, edits, and troubleshooting. | Provides developers with possible code recommendations in the appropriate format, identifies code errors, and suggests fixes for poorly performing code. | 31/03/2024 | Purchased from a vendor | Microsoft | Yes | Provides developers with possible code recommendations in the appropriate format, identifies code errors, and suggests fixes for poorly performing code. | No agency content used for training. | No | No | Yes | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing ||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-587 | DIRECTIVES | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on DOE/NNSA content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-588 | AI-Assisted Strategies and Solutions for Environmental Technology (AI-ASSET) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | In development stage and impact has not been determined | Agentic AI | Enable rapid technology transfer of the ALTEMIS AI approaches to new sites and new systems through an automated data analysis and knowledge management toolkit | AI-assisted monitoring system design, generalized AIML contaminant forecasting framework, development of end state recommendations that account for site-specific environmental/technological/regulatory constraints, EM knowledge management | Knowledge graphs, analysis output | Knowledge graphs, analysis output | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-589 | Google Agentspace / NotebookLM | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Unify enterprise data and enable large-scale agent deployment with Google Agentspace. | Improves enterprise knowledge use and team productivity at scale. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/07/2025 | Purchased from a vendor | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-59 | Cognitive Prescreen Tool (CPT) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | No high-impact category applies | Classical/Predictive Machine Learning | Classification recommendations to assist the Derivative Classifier in making a document's overall classification determination. | Serve as a recommender to Derivative Classifier to assist with document review to improve process accuracy and efficiency, in that order. | Sensitive information detection bound to DOE classification guidance to help reduce IOSC and prevent information loss | 18/12/2019 | Developed in house | Yes | Sensitive information detection bound to DOE classification guidance to help reduce IOSC and prevent information loss | Classification Guides from the CNS Classification Office | No | None of the above | Yes | |||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-590 | SpyglassGPT Chat Assistant | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Generative AI | Aid in routine administrative tasks | Increased efficiency for routine tasks | Varies based on user prompts; it is a general purpose chatbot using government instances of popular OpenAI models. | 18/08/2025 | Developed with both contracting and in-house resources | Microsoft | Yes | Varies based on user prompts; it is a general purpose chatbot using government instances of popular OpenAI models. | OpenAI trained the models on a mix of publicly available, licensed, and open-source data, including text, code, images, and audio, with no proprietary or user data used without explicit permission. They were tested and aligned for safety and quality, using filtered and optimized subsets of data as appropriate for model size and capabilities. | No | No | https://github.com/open-webui/open-webui | ||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-591 | Custom GenAI for Advanced Reactor Development (LotusAI) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Generative AI | Engineers spend extensive time manually iterating through documents during the development of the LOTUS Test Bed and must frequently answer user questions. | Engineers spend extensive time manually iterating through documents during the development of the LOTUS Test Bed and must frequently answer user questions. | Answers to Test Bed design and engineering questions with citations back to source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development | Yes | Answers to Test Bed design and engineering questions with citations back to source documents. | LOTUS Test Bed design and schematic information | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-592 | Use AI/ML for Climate Prediction | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models; partial differential equation solving using machine learning for simulating the aerosol-cloud-precipitation system that is recognized to be the key in forecasting weather and climate change | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models; partial differential equation solving using machine learning for simulating the aerosol-cloud-precipitation system that is recognized to be the key in forecasting weather and climate change | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models; partial differential equation solving using machine learning for simulating the aerosol-cloud-precipitation system that is recognized to be the key in forecasting weather and climate change | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models; partial differential equation solving using machine learning for simulating the aerosol-cloud-precipitation system that is recognized to be the key in forecasting weather and climate change | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-593 | Microsoft CoPilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Generative AI | Generative AI tools like this solve the problem of time-consuming knowledge work by instantly providing expert-level assistance with writing, coding, research, and analysis. They automate repetitive tasks that require human expertise, allowing people | Boost employee productivity by helping to create content, assist in coding, and complex problem-solving tasks, while democratizing access to sophisticated capabilities like writing and design. This technology promises to accelerate innovation by serving as an intelligent collaborator, freeing humans to focus on higher-level strategic and creative work | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | 03/07/2025 | Developed with both contracting and in-house resources | Microsoft | No | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | Microsoft 365 Copilot is powered by large language models (LLMs) developed by OpenAI (e.g., GPT-4), which are trained on a broad corpus of publicly available data, licensed datasets, and Microsoft-curated content. This includes public web content, books, articles, and licensed third-party data | Not available | Yes | No | Not available | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-594 | M&O Program Trimester Reporting Modernization | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Modernize and automate trimester reporting to improve clarity and consistency. | Increases transparency and efficiency in program reporting. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-595 | AI and natural-language powered search | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not fall within the requirements for high impact. | Classical/Predictive Machine Learning | Solve records retrieval issues. | Efficient retrieval of records. | Open Text will generate a search results report. | 02/01/2019 | Purchased from a vendor | OpenText | Yes | Open Text will generate a search results report. | Trained using existing internal EERE records curated by subject matter experts. | Yes | None of the above | No | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-596 | Microsoft Visual C++ Redistributable | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-597 | Offshore AIIM Dashboard | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Evaluate the integrity of offshore energy infrastructure (e.g., pipelines, platforms) in the U.S. Gulf Region. | Evaluate integrity of infrastructure. | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-598 | EDX-ClaiMM | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Natural Language Processing | Address fundamental knowledge gaps and foster the innovation of new techniques for enhanced characterization and recovery of critical minerals and materials (CMMs) within the US. | Address fundamental knowledge gaps. | Data | Data | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-599 | Microsoft Teams | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-60 | Machine Learning for Linac Improved Performance | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | In Linacs at FNAL and J-PARC, the current emittance optimization procedure is limited to manual adjustments of a few parameters; using a larger number is not practically feasible for a human operator. Using machine learning (ML) techniques allows li | Daily fluctuations in the Ion Source conditions as well as the effect of environmental changes to RF systems and cavities affect Linac beam. Results include increased beam loss resulting in increased beamline component irradiation, decreased beam intensity to downstream machines affecting Accelerator Complex deliverables, drifts in Linac beam energy directly affecting Booster losses. These drifts are not easily predictable since we do not have environmental control on the RF gallery, nor enough instrumentation in the Ion Source or Linac proper. To counter these effects, we are developing AI-based optimization and modeling, including Bayesian Optimization and surrogate model-based optimization, with the ultimate goal of (near) real-time RF compensation. | Outputs are proposed changes to RF system parameters (cavity phase settings and/or field gradients) to counter the effect of daily drift and to stabilize the output energy. | 25/09/2025 | Developed in house | No | Outputs are proposed changes to RF system parameters (cavity phase settings and/or field gradients) to counter the effect of daily drift and to stabilize the output energy. | Accelerator operations machine data as well as accelerator simulation | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-600 | Storage usage effectiveness and data placement optimization at Data Center | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-601 | xAI Grok Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and capabilities | Generative AI | Quick access to general knowledge. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | xAI | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-602 | Microsoft Search in Bing | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | WAPA - Western Area Power Administration (PMA) | DOE-603 | Microsoft Copilot (Pilot) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | AI output does not serve as a principal basis for decisions or action with legal, material, binding or significant effect on high impact areas. | Natural Language Processing | Improve general office productivity | Content creation and drafting, data analysis and summarization, personalized learning and research | Content review recommendations, content summaries, how-to instructions. | 01/08/2025 | Purchased from a vendor | Microsoft | Yes | Content review recommendations, content summaries, how-to instructions. | No agency content used for training. | No | No | In-Progress | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | In-Progress | ||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-604 | FindMATID | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | Search for corporate material IDs | Maximize use of Strategic Agreements | Material ID | 01/10/2024 | Developed in house | Yes | Material ID | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-606 | AI Enabled Code Review | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | Enhance application development lifecycle capabilities | Expedited code production | Code | Code | |||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-608 | Microsoft Teams Classic | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-609 | Custom GenAI for Advanced Reactor Development (RickoverAI) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Generative AI | Reactor engineers spending extensive time manually iterating through documents during the reactor engineering process | Reactor engineers save time by using the AI to quickly find answers and relevant source documents. | Answers to reactor design and engineering questions with citations back to source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development with INL Advanced Analytics Center of Excellence | Yes | Answers to reactor design and engineering questions with citations back to source documents. | Advanced reactor engineering schematics | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-61 | AI Denoising | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | ||||||||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-610 | Report Assistant | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This use case does not significantly affect legal, material, or binding rights or critical access to services. | Classical/Predictive Machine Learning | Management spends excessive hours generating and reformatting reports for various customers/audiences; this will reduce the learning curve and effort required to translate information into the various formats, saving considerable time and effort. | This will reduce the time and effort required to generate reports, allowing management to focus on more critical tasks. It is expected to halve the time spent on report generation in the first year. | The AI system will generate draft reports for review and finalization. | 01/10/2024 | Developed in house | The AI system will generate draft reports for review and finalization. | ||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-611 | AI-Tailored Learning Management Solutions | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case focuses on enhancing training materials and does not significantly affect legal, material, or binding rights or critical access to services. | Generative AI | To improve the quality, relevance, and delivery of training materials, ensuring they are tailored and effective for users. | Improved training materials, dynamic learning tailored to user needs, real-time feedback, comprehensive evaluation, and higher staff effectiveness and readiness. | The AI system outputs training materials with dynamic questions and feedback, detailed performance reports, and personalized learning paths. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The AI system outputs training materials with dynamic questions and feedback, detailed performance reports, and personalized learning paths. | |||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-612 | Objective-Driven Data Reduction for Scientific Workflows | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-613 | LivChat | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | General business process and policy knowledge. | Generative AI | Quick access to general policy and process knowledge. | Ability to quickly find and access internal process and procedure knowledge, and business productivity | General answers to questions, summarized documentation, internal process and policy information. | 30/06/2025 | Developed in house | Yes | General answers to questions, summarized documentation, internal process and policy information. | Not involved in training | No | No | None of the above | Yes | N/A | Yes | Not applicable | Not applicable | Other | |||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-614 | Autonomous, real-time guiding of BCP film synthesis | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-615 | Ask IT | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on IT content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-616 | ServiceNow Now Assist (AskIT) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for Help Desk Function | Other | Service Desk operations | Improve the timeliness and quality of transactional service desk requests through improved incident search, resolution, feedback as well as AI-assisted coding of workflows. | Outputs are consistent with commercial service desk products, including the search, creation and closure of service requests. | Developed with both contracting and in-house resources | Not available | Yes | Outputs are consistent with commercial service desk products, including the search, creation and closure of service requests. | This use case will utilize service desk incident, problem and knowledge management sources for training and use. | Not available | Yes | Yes | Not available | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-617 | Low Dose Radiation Biology | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Predicting the effects of low dose radiation | Predicting the effects of low dose radiation | Predicting the effects of low dose radiation | Predicting the effects of low dose radiation | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-618 | Claude Anthropic Enterprise | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Deliver secure enterprise AI with Claude's large-context models and code generation. | Offers secure, large-context AI tools for research and analysis. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/08/2025 | Purchased from a vendor | Anthropic | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-619 | KBase: An Integrated Knowledgebase for Predictive Biology and Environmental Research | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | ||||||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-62 | Next-Generation Beam Cooling and Control with Optical Stochastic Cooling | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Reinforcement Learning | This program leverages the physics and technology of optical stochastic cooling (OSC) to explore new possibilities in beam control and sensing. The planned architecture and performance of a new OSC system at IOTA should enable turn-by-turn programma | This effort focuses on enhanced real-time control of the structure of circulating particle beams. The additional performance and capabilities provided may enable substantially greater operational flexibility and science reach at current and future DOE accelerator facilities. | The AI system will continuously infer the state of a circulating beam distribution and then use this inference in the execution of an RL-based control policy. The primary means of control is an advanced optical stochastic cooling system. | No | The AI system will continuously infer the state of a circulating beam distribution and then use this inference in the execution of an RL-based control policy. The primary means of control is an advanced optical stochastic cooling system. | Large-scale simulation data is being used to train the diagnostic and control systems. Online training with experimental data may also be leveraged once the system is operational. | No | Yes | ||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-620 | AI Video Creation | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet the criteria. | Generative AI | Enhance internal communications and streamline Learning and Development | Tool that lab staff can use to facilitate creation of generative AI enabled media | Multi-modal media output | Multi-modal media output | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-621 | ChatGPT | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Generative AI | Generative AI tools like this solve the problem of time-consuming knowledge work by instantly providing expert-level assistance with writing, coding, research, and analysis. They automate repetitive tasks that require human expertise, allowing people | Boost employee productivity by helping to create content, assist with coding, and tackle complex problem-solving tasks, while democratizing access to sophisticated capabilities like writing and design. This technology promises to accelerate innovation by serving as an intelligent collaborator, freeing humans to focus on higher-level strategic and creative work | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | 25/06/2025 | Developed with both contracting and in-house resources | OpenAI | No | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | "ChatGPT (including GPT-4 and GPT-5) is trained on a large and diverse corpus of publicly available and licensed data. This includes: public internet text (websites, articles, forums, books); licensed datasets from publishers and providers; data created by human trainers to refine performance. 
Importantly, ChatGPT is not trained on proprietary or private company data." | Not available | No | No | Not available | |||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-622 | Enhancing the circularity: Cost effective battery de-energization, disassembly, and pre-processing (CEBDDP) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Will serve as basis for decisions on use of end-of-first life batteries | Classical/Predictive Machine Learning | Predict state of health and state of function of spent batteries | Improved ability for prediction to enable potential reuse of batteries, towards reducing costs of batteries | Prediction of battery state of health | Prediction of battery state of health | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-623 | Microsoft MSPaint | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-624 | AGN-201 DT | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | This deals with international safeguards approaches. | Classical/Predictive Machine Learning | Aid safeguards analysts in determining if a reactor is being used in a non-declared way. | Reduce the burden on inspectors by synthesizing data and flagging anomalies. | Flag raised when off-normal operations are detected, along with expected material generated. | Developed with both contracting and in-house resources | Not available | No | Flag raised when off-normal operations are detected, along with expected material generated. | Not available | Not available | No | No | Not available | ||||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-625 | Intrabot | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | The AI output is not a basis for decisions or actions. It is for information retrieval. | Generative AI | The AI is intended to improve efficiency by providing plant personnel with a faster way to access information on the Pantex Intranet. | The expected benefit is increased efficiency for plant personnel. | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant Intranet pages. | Developed in house | No | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant Intranet pages. | ||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-626 | AI Drafting of Operational Procedures and Training Materials | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case does not significantly affect legal, material, or binding rights or critical access to services. | Generative AI | "To automate and enhance the drafting of operational procedures and training materials, reducing time, effort, and errors." | "Reduced development time, reduced errors and rework, better context understanding, and improved standardization and formatting, cutting time spent on document creation by half." | "The AI system outputs draft operational procedures and training materials which need to be reviewed and corrected by users." | 01/10/2024 | Developed in house | SRNS - OT In House Staff | "The AI system outputs draft operational procedures and training materials which need to be reviewed and corrected by users." | |||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-627 | Integrated Platform for Multimodal Data Capture, Exploration and Discovery Driven by AI Tools | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. Additional | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. Additionally, we will build data analysis tools that reveal correlations in multimodal data and apply Machine Learning (ML) methods and train artificial intelligence (AI) models that efficiently extract synergistic physical information and embed such models in new workflows for rapid scientific discovery. | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. Additionally, we will build data analysis tools that reveal correlations in multimodal data and apply Machine Learning (ML) methods and train artificial intelligence (AI) models that efficiently extract synergistic physical information and embed such models in new workflows for rapid scientific discovery. | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. 
Additionally, we will build data analysis tools that reveal correlations in multimodal data and apply Machine Learning (ML) methods and train artificial intelligence (AI) models that efficiently extract synergistic physical information and embed such models in new workflows for rapid scientific discovery. | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-628 | Improve Scout Search results | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Improve enterprise search with natural language prompts and personalization. | Provides faster, more accurate access to enterprise knowledge. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-629 | Consolidated Nuclear Waste Glass Database | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | In development stage and impact has not been determined | Classical/Predictive Machine Learning | Incorporation of several physics-driven machine learning models to predict the properties of nuclear waste glass compositions – in addition, bootstrap other glass computational science models such as GlassPy and GlassNet to the database | Develop an open-source, online database consisting of property information for nuclear waste glass data generated by various national laboratories over several decades | Textual output; Chemical glass composition | Textual output; Chemical glass composition | ||||||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-63 | In-storage computing for multi-messenger astronomy in neutrino experiments and cosmological surveys | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project aims to address the big-data challenges and stringent time constraints facing multi-messenger astronomy (MMA) in neutrino experiments and cosmological surveys. Instead of following the traditional computing paradigm of moving data to th | The purpose is to enhance the ability of large scale neutrino experiments like DUNE to detect neutrinos from core-collapse supernovas (CCSNs) and to extract useful information about their source in real time to provide prompt multi-messenger alerts to other observatories. Aside from enabling prompt SN pointing that is also precise, this will cut down the rate of fake SN triggers (currently estimated at ~1/month) and therefore offer potential savings from a reduction in the hardware resources required for storing the large amounts of data associated with CCSN candidates. | The output of the AI system is a set of predictions which will be used as the basis for a drastic reduction in the amount of data to be fed to the next stage involving reconstruction and analysis. Before feeding the data to this stage, the AI system will also perform preprocessing operations such as noise removal to facilitate and speed up subsequent data processing. 
| No | The output of the AI system is a set of predictions which will be used as the basis for a drastic reduction in the amount of data to be fed to the next stage involving reconstruction and analysis. Before feeding the data to this stage, the AI system will also perform preprocessing operations such as noise removal to facilitate and speed up subsequent data processing. | Simulated data closely approximating real-world raw detector data expected from CCSNs is used to train and validate the ML models used in the data reduction and preprocessing pipeline. | No | Yes | ||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-630 | HR Job Postings | Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | a) High-impact | High-impact | Not available | Generative AI | Generic or ineffective job postings | Enhances applicant diversity and job match quality by optimizing language and structure in postings, improving recruitment outcomes. | Improved job postings | 01/12/2025 | Developed in house | Yes | Improved job postings | Not available | No | Yes | Not available | |||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-631 | Visual Studio Community | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-632 | Invoice Scanning System | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effects, nor does it have any significant effects on the items listed in the definition of high-impact AI in OMB M-25-21. | Classical/Predictive Machine Learning | Missing information in subcontract submittals | Greater accuracy of information on required forms | Analytics | 04/09/2024 | Developed in house | Yes | Analytics | Subcontractor Database | Not applicable | No | Not applicable | None of the above | Yes | Not applicable | Yes | Not applicable | Impacts were assessed by the software owner and developer | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-633 | Facilities Visual Inspection | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Computer Vision | Detect hazards and assess facility conditions through AI-enhanced visual inspections. | Improves workplace safety and hazard detection in facilities. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-634 | Microsoft OneDrive | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-635 | QuantomVision | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | Workforce upskilling | Predict workforce evolution and analyze skills mapping. | Predict workforce evolution and analyze skills mapping. | OpenAI | Predict workforce evolution and analyze skills mapping. | N/A | |||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-637 | ServiceNow Cluster Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Classical/Predictive Machine Learning | Identify patterns of tickets created to determine workflow and automated solutions | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-638 | ChatSRS | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | General text-oriented chat (e.g., summarization, generation) | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | None | No | None of the above | No | Yes | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-64 | high level synthesis for machine learning (previously hls4ml) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project develops hardware-software AI codesign tools for FPGAs and ASICs for algorithms running at the extreme edge. | hls4ml is used to implement specialized AI algorithms in embedded hardware. This is valuable across a wide range of scientific applications for enabling real-time processing capabilities. This can accelerate scientific discovery and reduce time to science, enabling large cost savings and DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | 25/09/2025 | Developed in house | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-640 | Knowledge Capture Agent (KCA) | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | Capturing tenured employees' experience-based knowledge. | To pass down this experience and preserve it in a database | Feeding a database that can later be queried by newcomers. | Feeding a database that can later be queried by newcomers. | |||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-641 | AI Support Agent Chatbot | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Provides advisory information only; does not make binding decisions. | Generative AI | Manual helpdesk tickets consume staff time and delay issue resolution. | Reduce Tier-1 support time, improve satisfaction with 24/7 responses, free staff for complex issues. | Conversational responses, step-by-step guidance, and links to support documentation. | Conversational responses, step-by-step guidance, and links to support documentation. | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-642 | UNSPSC Codes | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | Map requisition items to UNSPSC codes | Maximize use of Strategic Agreements and facilitate SCM reporting | UNSPSC Codes | 01/10/2024 | Developed in house | Yes | UNSPSC Codes | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-643 | ServiceNow Similarity Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Classical/Predictive Machine Learning | Identify patterns of tickets created to determine workflow and automated solutions | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-644 | Accelerating HEP Science: Inference and Machine Learning at Extreme Scales | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Developing Galaxy Image deblending with the gravitational lensing effects and scaling AI/ML algorithms | Developing Galaxy Image deblending with the gravitational lensing effects and scaling AI/ML algorithms | Developing Galaxy Image deblending with the gravitational lensing effects and scaling AI/ML algorithms | Developing Galaxy Image deblending with the gravitational lensing effects and scaling AI/ML algorithms | ||||||||||||||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-645 | Georeference Figures | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, or binding effects | Classical/Predictive Machine Learning | AI-powered georeferencing will improve automation, speed, and efficiency. Manual georeferencing is expensive. | Georeferencing of historical paper maps. | AI outputs georeferenced vector file which will be evaluated by humans | 01/03/2025 | Purchased from a vendor | Tesseract | Yes | AI outputs georeferenced vector file which will be evaluated by humans | Output is compared with our aerial baseline. | No | None of the above | No | In-Progress | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-646 | Microsoft Discovery | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Speed up transparent, governed R&D discovery with AI agents and graph knowledge engines. | Accelerates innovation with transparent, AI-driven R&D processes. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-647 | CyberSearch | Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | The AI output is not a basis for decisions or actions. It is for information retrieval. | Generative AI | The AI is intended to improve the efficiency of plant personnel by giving them an easy way to access and search cybersecurity documents. | The expected benefit is increased efficiency for plant personnel. | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. | Developed in house | No | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. | ||||||||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-648 | O365 Copilot | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Generative AI | Improve day-to-day business functionality | Automation of redundant business functions to increase efficiency | Recommendations to provide insight for better decision making, scheduling, notetaking, and data analysis | Yes | Recommendations to provide insight for better decision making, scheduling, notetaking, and data analysis | |||||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-65 | Streamlining intelligent detectors for sPHENIX/EIC | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project develops real-time algorithms for event filtering with tracking detectors for nuclear physics collider experiments. | AI tools are developed for embedded inference in real-time processing systems for scientific experiments such as sPHENIX and upcoming EIC. This can accelerate scientific discovery and reduce time to science, enabling large cost savings and DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-650 | Critical Materials | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | AI will make discoveries of unknown combinations of rare earth elements and ligands used in critical materials that are currently impossible to separate | Make discoveries of combinations of rare earth elements and ligands used in critical materials efficiently, faster, less costly | New datasets and benchmarks | New datasets and benchmarks | |||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-651 | Microsoft 365 | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-652 | Advanced Fuels Campaign | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Other | Not available | Accelerates fuel development cycles and improves performance predictions, reducing R&D costs and time to deployment. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | Not available | No | Yes | Not available | ||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-653 | Hanford AI Liaison (HAL) 2.0 | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effects, nor does it have any significant effect on the items listed in OMB M-25-21's definition of high-impact. | Generative AI | Connecting business data streams to AI for increased analysis and work efficiency | Cost savings, increased efficiency, increased productivity, greater analytics of data | Text answers to input questions | 01/09/2025 | Developed in house | Yes | Text answers to input questions | Pre-trained from OpenAI with access to Hanford-specific data sources (search, popfon, and ESP) | Not applicable | No | Not applicable | None of the above | Yes | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-654 | ATLAS | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Used for technology deployment function. | Other | Make processes in Technology Deployment more effective and efficient | Enables faster data discovery and analysis across large datasets, improving research productivity and insight generation. | Information summaries, fact sheets, marketing guides, proposal drafts, and categorization guidance | No | Information summaries, fact sheets, marketing guides, proposal drafts, and categorization guidance | Currently utilizing Azure OpenAI and HPC LLMs. Intending to utilize a RAG model for DOE Tech Transfer-specific information. | No | No | Yes | |||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-655 | Continuous Structure Descriptors for XANES Interpretation | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | ||||||||||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-656 | Azure AI Search | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Azure AI Search indexes data. The end user is responsible for the verification of the data and the final use. This does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Natural Language Processing | Enhance the capability to find structured and unstructured data within databases and data lakes. | Improve the efficiency of finding a data source. Azure AI Search also creates an internal knowledge base for use in an LLM. | AI Search creates a vectorized database. | AI Search creates a vectorized database. | ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-657 | ServiceNow Now Assist | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Agentic AI | Provide Agentic AI capabilities for use in multiple use cases for the IT divisions at LANL | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-658 | pComply-AI-High Risk | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Classical/Predictive Machine Learning | Preemptive procurement compliance: daily scans of recent PARIS requisition records. | Developed as a rapid response for a corrective action to eliminate incidents when high-risk materials are procured inadvertently without the required review processes and handling. | Solution features an interactive report dashboard and automatic email notifications to persons involved with the requisition when a high-risk scenario is identified. | 01/01/2024 | Developed in house | Yes | Solution features an interactive report dashboard and automatic email notifications to persons involved with the requisition when a high-risk scenario is identified. | ANL Operational data. | No | No | Yes | Not applicable | Yes | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Direct usability testing | |||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-659 | Groundwater Modeling | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, or binding effects | Classical/Predictive Machine Learning | Solves critical challenges in water modeling by providing forward-looking monitoring. | Forecast groundwater behavior | The AI output is a groundwater model that will be evaluated by humans. | 02/01/2003 | Purchased from a vendor | PEST | Yes | The AI output is a groundwater model that will be evaluated by humans. | Data is held back from the models to validate model outputs. | No | None of the above | No | Yes | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-66 | In-pixel AI for future tracking detectors | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project explores novel AI-on-chip technology for intelligent detectors embedded with sensing technology | AI algorithms are implemented in on-detector electronics in order to reduce data size and enable processing at high rates. | A recommendation of whether to save data based on AI classifier. Or, a fast inference of track parameters to be used for fast selection | 25/09/2025 | Developed in house | No | A recommendation of whether to save data based on AI classifier. Or, a fast inference of track parameters to be used for fast selection | Accelerator operations machine data | No | Yes | unknown | |||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-660 | Instrument Documentation Search | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Provide quick answers about instrument operations by searching ingested manuals. | Speeds troubleshooting and learning of lab instruments. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-661 | MOOSE-LLM | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Focusing on the application of generative AI on modeling and simulation tasks | Generative AI | Tools like MOOSE for multiphysics modeling have a steep learning curve. They demand considerable domain-specific knowledge, which makes it hard for newcomers to get started. | Improves user experience and reduces training time by enabling natural language assistance within the MOOSE simulation framework. | Improved documentation, input file completion, convergence analysis | 26/08/2025 | Developed in house | Open source | No | Improved documentation, input file completion, convergence analysis | This use case uses open-source code documentation, open-source large language models, and retrieval-augmented generation to build a MOOSE modeling and simulation AI assistant | Yes | No | Yes | Under SDR process; currently on INL GitLab https://hpcgitlab.hpc.inl.gov/idaholab/moosenger | MOOSENger saves time, reduces errors while building MOOSE multiphysics models, and streamlines the modeling and simulation workflow | ||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-662 | DevSec Ops AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose IT M&O. | Other | Not available | accelerates secure software delivery | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-663 | Machine Learning for Autonomous Control of Scientific User Facilities | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | ||||||||||||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-664 | Preventive Maintenance Procedure Development | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Generative AI | The AI is intended to solve the inefficiency and potential for error in manually developing and updating preventive maintenance procedures. It addresses the challenge of synthesizing information from multiple, diverse data sources to ensure compliance. | The expected benefits include increased productivity, reduced manual research time, minimized errors, improved compliance, enhanced equipment uptime, and cost savings. | The system's outputs are comprehensive, compliant, and detailed preventive maintenance procedures for facilities and equipment. | Developed in house | The system's outputs are comprehensive, compliant, and detailed preventive maintenance procedures for facilities and equipment. | ||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-665 | OpenText for Records Management (File share auto-classification) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not fall within the requirements for high impact. | Classical/Predictive Machine Learning | Records retention and disposition. | Reduce time required to classify legacy records accumulated over 20 years. | OpenText will categorize each file located on the network drives. | 24/10/2022 | Purchased from a vendor | OpenText | Yes | OpenText will categorize each file located on the network drives. | Trained using existing internal EERE records curated by subject matter experts. | Yes | None of the above | No | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-666 | Towards Edge Computing: A Software and Hardware Co-Design Methodology for Application-Specific Integrated Circuit (ASIC)-based Scientific Neuromorphic Computing (NC) | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-667 | Microsoft Visual C++ Minimum Runtime | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-668 | AI/ML for Applications in High Energy and Nuclear Physics | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-669 | Microsoft OneDrive MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-67 | SONIC: AI acceleration as a service | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project focuses on integration of AI hardware for at-scale inference acceleration for particle physics experiments. | SONIC is used to accelerate AI workloads on coprocessors in scientific experiments. This can accelerate scientific discovery and time to science, thus enabling large cost savings and DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | 25/09/2025 | Developed in house | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-670 | RAPIDS3: A SciDAC Institute for Computer Science, Data, and Artificial Intelligence | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | A SciDAC computer science institute; BNL co-leads the AI team | A SciDAC computer science institute; BNL co-leads the AI team | A SciDAC computer science institute; BNL co-leads the AI team | A SciDAC computer science institute; BNL co-leads the AI team | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-671 | LISA Chatbot Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Generative AI | Quick access to specific mission data sets and documentation | Ability to quickly find and access relevant mission data and experiment documentation | Specific, data-driven answers to mission science questions | 30/06/2025 | Developed with both contracting and in-house resources | AWS | No | Specific, data-driven answers to mission science questions | Mission science data and documentation | No | No | None of the above | Yes | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-672 | Use AI/ML to Enhance the Bioimaging Capabilities at Brookhaven National Laboratory (BNL) | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-673 | Climate Weather Data | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Targeted for weather and climate related use cases | Other | Not available | speeds up climate model analysis | Not available | Developed with both contracting and in-house resources | Not available | No | Not available | Not available | No | Yes | Not available | |||||||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-674 | Scripting | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, or binding effects | Natural Language Processing | AI-powered scripting will improve automation, speed, and efficiency. | Reduced cost of altering groundwater models, improving groundwater outcomes. | Python code for use in models | 30/01/2025 | Purchased from a vendor | Google Gemini | No | Python code for use in models | Data is held back from the models to validate model outputs. | No | None of the above | Yes | In-Progress | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-676 | Microsoft Copilot Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and business productivity capabilities | Generative AI | Quick access to self-generated (Documents, Emails, Files) knowledge and business productivity. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | Microsoft | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-677 | EES&T Document Processing | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used for general purpose use cases within EES&T organization | Other | Not available | Cuts manual processing time and improves data accessibility by automating document classification and extraction, increasing operational efficiency. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-678 | Smart CO2 Transport-Route Planning Tool | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Identify potential routes or evaluate existing corridors for carbon transport based on current legislation, best construction practices, and more. | Inform planning and development | Data | Data | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-679 | PermitAI | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Agentic AI | Streamline permitting processes with AI-powered environmental review tools. | Speeds up Federal permitting while improving transparency. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-68 | High-Velocity AI: Generative Models | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | ||||||||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-681 | AI for Vendor Compliance | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet criteria. | Generative AI | Making sure vendors are in compliance with regulations. | More streamlined risk mitigation process with vendors. | Improved consistency in vendor selection and retention | Improved consistency in vendor selection and retention | ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-682 | ServiceNow LANL AI Portal Integration | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Generative AI | Provide GenAI capabilities for use in multiple use cases for the IT divisions at LANL | Improved automation | Recommendation | 01/11/2024 | Developed in house | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-683 | Microsoft AI Builder | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Agentic AI | Microsoft AI Builder solves the problem that most businesses want to use AI but lack the technical expertise and resources to build it themselves. It provides easy-to-use, pre-built AI tools that business users can implement without needing data scientists. | Microsoft AI Builder democratizes AI development by enabling business users to easily create custom AI models and automation solutions without extensive coding. It integrates with the Microsoft Power Platform to add AI capabilities like document processing and predictions directly into workflows, accelerating digital transformation through accessible, low-code AI solutions that enhance business processes and decision-making. | Microsoft AI Builder outputs structured business data and automated actions, including extracted information from documents, data predictions, and workflow automations that integrate directly into existing business processes and applications. | 20/06/2025 | Developed with both contracting and in-house resources | Microsoft | No | Microsoft AI Builder outputs structured business data and automated actions, including extracted information from documents, data predictions, and workflow automations that integrate directly into existing business processes and applications. | AI Builder models are trained on data that INL provides. This includes: Custom tables created by users; Imported datasets for prediction, form processing, object detection, and classification tasks; Data from Power Apps, Power Automate, and other Power Platform components. Training data remains within our Microsoft environment and tenant. | Not available | Yes | No | Not available | |||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-684 | LexisNexis | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effect, nor does it have any significant effect on the items listed in the definition of high-impact in OMB M-25-21. | Generative AI | Searching a legal database for case law | Better case law material for the legal team | Search results in response to input inquiries | 01/08/2025 | Purchased from a vendor | Nexus | Yes | Search results in response to input inquiries | Pretrained AI with access to a legal database | Not applicable | No | Not applicable | None of the above | No | Not applicable | In-Progress | Not applicable | Potential impacts are still being assessed | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | In-Progress |
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-685 | Microsoft 365 Apps for Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-686 | Automated sorting of high repetition rate coherent diffraction data from XFELS | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Coherent X-rays are routinely provided today by the latest Synchrotron and X-ray Free-electron Laser Sources. When these diffract from a crystal containing defects, interference leads to the formation of a modulated diffraction pattern called "speckle". When the defects move around, they can be quantified by a correlation analysis technique called X-ray Photon Correlation Spectroscopy. But the speckles also change when the beam moves on the sample. By scanning the beam in a controlled way, the overlap between the adjacent regions gives redundancy to the data, which allows a solution of the inherent phase problem. This is the basis of the coherent X-ray ptychography method, which can achieve image resolutions of 10 nm, but only if the probe positions are known. The goal of this proposal is to separate "genuine" fluctuations of a material sample from the inherent beam fluctuations at the high data rates of XFELs. Algorithms will be developed to calculate the correlations between all the coherent diffraction patterns arriving in a time series, then used to separate the two sources of fluctuation using the criterion that the "natural" thermal fluctuations do not repeat, while beam ones do. We separate the data stream into image and beam "modes" automatically. | Coherent X-rays are routinely provided today by the latest Synchrotron and X-ray Free-electron Laser Sources. When these diffract from a crystal containing defects, interference leads to the formation of a modulated diffraction pattern called "speckle". When the defects move around, they can be quantified by a correlation analysis technique called X-ray Photon Correlation Spectroscopy. But the speckles also change when the beam moves on the sample. By scanning the beam in a controlled way, the overlap between the adjacent regions gives redundancy to the data, which allows a solution of the inherent phase problem. This is the basis of the coherent X-ray ptychography method, which can achieve image resolutions of 10 nm, but only if the probe positions are known. The goal of this proposal is to separate "genuine" fluctuations of a material sample from the inherent beam fluctuations at the high data rates of XFELs. Algorithms will be developed to calculate the correlations between all the coherent diffraction patterns arriving in a time series, then used to separate the two sources of fluctuation using the criterion that the "natural" thermal fluctuations do not repeat, while beam ones do. We separate the data stream into image and beam "modes" automatically. |||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-689 | Software Implementation Assistant | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case primarily assists in planning and document-related tasks for software implementations and does not significantly affect legal, material, or binding rights or critical access to services. | Agentic AI | This AI solution is designed to streamline the process of software implementation by automating crucial planning tasks, technical documentation, and risk assessments, which typically demand significant time and effort. The solution aims to support junior resources. | The AI solution is expected to enhance the agency's mission by reducing the time, effort, and complexity involved in software implementation efforts, leading to significant cost savings. It will improve project planning, standardize technical documentation, aid junior resources, and allow subject matter experts to focus on critical tasks, ultimately improving project outcomes and resource utilization for the public's benefit. | The AI system outputs detailed project plans, organized tasks, technical documents, use cases, test scripts, and risk assessments specific to the software implementation project and application. | 01/10/2024 | Developed in house | Purchased from a Vendor - Tabnine | The AI system outputs detailed project plans, organized tasks, technical documents, use cases, test scripts, and risk assessments specific to the software implementation project and application. | |||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-69 | Uncertainty Quantification and Instrument Automation to enable next generation cosmological discoveries | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project will develop AI-based tools to enable critical sectors for near-future cosmic applications. Uncertainty quantification is essential for performing discovery science now, and simulation-based inference offers a new approach. The automated | Create new methods for uncertainty quantification in AI | AI algorithms | No | AI algorithms | My own simulated data; research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-690 | Copilot Studio | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Generative AI | Copilot Studio will help NNL build custom AI agents that automate repetitive tasks, answer common questions, and streamline workflows across departments, all without requiring deep coding expertise. | Copilot Studio delivers significant benefits by enabling NNL to build custom AI agents that automate routine tasks, enhance decision-making, and improve operational efficiency, all through a low-code interface. | Depending on how the agent is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | Depending on how the agent is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-691 | Microsoft Outlook MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-692 | HALO AI | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Demonstration of AI agents for dense data interfaces at a specific installation. The impact is limited to a small number of operational functions but could expand if successful. | Other | Not available | Increases operational awareness and response speed by automating data analysis, leading to faster, more informed decision-making. | Information summaries | No | Information summaries | No | No | Yes | ||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-693 | COREII | Pilot – The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | c) Not high-impact | Not high-impact | This AI system leverages Retrieval Augmented Generation (RAG)-enabled Generative AI over large corpora of OT/ICS cybersecurity data and information to support decision making within critical infrastructure operations. | Generative AI | Support decision making for critical infrastructure OT cybersecurity operations. Provides a secure environment where sensitive data can be ingested and retrieved reliably. Reduces the time it takes to perform threat analysis, supply chain analysis, and knowledge transfer. | Making information and knowledge easier to retrieve and digest. Significantly improves analysis time. Maximizes the transfer of knowledge to users of various backgrounds. Translates complex and complicated research results into accessible practical solutions. | Distillation of research results and findings into practical decision-making information. | 01/04/2025 | Developed with both contracting and in-house resources | | No | Distillation of research results and findings into practical decision-making information. | The Large Language Model is a pretrained open-source model, but the Retrieval Augmented Generation (RAG) engine retrieves from the entire OSTI.gov corpus, the Known Exploited Vulnerability (KEV) dataset, CyOTE Precursor Analysis Reports, the CyOTE Observable Dataset, the entire Energy Information Administration (EIA.gov) dataset, all CISA OT Advisories, and ARC Web Market Analysis Studies (proprietary). This is a sample of the data, which can be added to by future users. | Yes | No | Yes | Currently under SDR process and public release in progress. No URL available yet. | |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-694 | GenAI for Classified Subject Area Categorization | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Pilot to help derivative classifiers, classification analysts, researchers, and others communicating content externally to quickly determine CSA. | Generative AI | Help users, DCs, and classification analysts more quickly determine Classified Subject Areas. | Help users, DCs, and classification analysts more quickly determine Classified Subject Areas. | List of recommendations of classified subject areas | 19/09/2025 | Developed with both contracting and in-house resources | Open source development with INL Advanced Analytics Center of Excellence | Yes | List of recommendations of classified subject areas | Classified Subject Area documentation, validated documents | Not available | No | Yes | Not available | |||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-695 | Sindri | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Agentic AI | Automate code assignment at requisition generation time | Maximize use of Strategic Agreements and facilitate SCM reporting | Decision | Developed in house | Yes | Decision | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-696 | AI Safety Tool | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Generative AI | Mitigate workplace incidents | To reduce workplace incidents and increase employee safety. | Text based warning message | Text based warning message | |||||||||||||||||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-697 | Microsoft Co-Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Output does not serve as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. Just productivity enhancement. | Generative AI | Overloaded workers often must perform routine, tedious administrative tasks that waste their valuable time | Eliminate tedious, time-consuming administrative tasks; reduce time to accomplish routine tasks | Drafted, summarized, or prioritized emails; meetings scheduled at optimal times; generated follow-up emails or meeting notes; reports, presentations, or summaries generated from raw data; content rewritten or refined for clarity and tone; key insights extracted from long documents; trends derived from spreadsheet analysis; charts and visualizations created from raw data; repetitive Excel tasks automated; presentation slides generated from bullet points or outlines; presentations with improved visual appeal; speaker notes and talking points generated | 01/10/2027 | Purchased from a vendor | Microsoft | Yes | Drafted, summarized, or prioritized emails; meetings scheduled at optimal times; generated follow-up emails or meeting notes; reports, presentations, or summaries generated from raw data; content rewritten or refined for clarity and tone; key insights extracted from long documents; trends derived from spreadsheet analysis; charts and visualizations created from raw data; repetitive Excel tasks automated; presentation slides generated from bullet points or outlines; presentations with improved visual appeal; speaker notes and talking points generated | Model trained by Microsoft that was publicly available | No | None of the above | No | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-698 | Microsoft Edge | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-699 | Anthropic Claude | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Generative AI | Generative AI tools like this solve the problem of time-consuming knowledge work by instantly providing expert-level assistance with writing, coding, research, and analysis. They automate repetitive tasks that require human expertise, allowing people to focus on higher-value work. | Boost employee productivity by helping to create content, assist in coding, and support complex problem-solving tasks, while democratizing access to sophisticated capabilities like writing and design. This technology promises to accelerate innovation by serving as an intelligent collaborator, freeing humans to focus on higher-level strategic and creative work | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | 03/07/2025 | Developed with both contracting and in-house resources | Anthropic | No | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | At this time, INL plans to use non-CUI data with this solution. Claude's training data consists of a diverse mix of text from books, articles, websites, and other publicly available written content up to its knowledge cutoff date (January 2025). | Not available | No | No | Not available | |||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-70 | READS: Real-time Edge AI for Distributed Systems | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project will develop and deploy low-latency controls and prediction algorithms at the Fermilab accelerator complex | READS has two sub-projects. The first project created the means to stream live Main Injector and Recycler accelerator beam loss monitor data. This data is then fed to an AI model deployed on an FPGA so that it can infer, in real time, the origin of beam loss, either Main Injector or Recycler, for each beam loss monitor in the tunnel enclosure. The second project aimed to improve upon traditional resonant beam extraction regulation techniques using AI for use in the Fermilab Delivery Ring and Mu2e. | The ML outputs of the system are inferences as to the origin of beam loss in the Main Injector accelerator enclosure and also suggested regulation ramps to best improve the Spill Duty Factor in the Delivery Ring for Mu2e | 25/09/2025 | Developed in house | No | The ML outputs of the system are inferences as to the origin of beam loss in the Main Injector accelerator enclosure and also suggested regulation ramps to best improve the Spill Duty Factor in the Delivery Ring for Mu2e | research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-700 | Amazon Q Chatbot Pilot | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Generative AI | Quick access to AWS cloud engineering reference architectures and general AWS services information | Ability to quickly find and access AWS architecture and services information | Specific AWS architecture and services use-case responses to questions | 30/06/2025 | Developed with both contracting and in-house resources | AWS | No | Specific AWS architecture and services use-case responses to questions | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-701 | Agentic AI for cybersecurity change request reviews | Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Agentic AI | Use AI Agents for cybersecurity review of documents and structured data to improve processes and to help identify areas of risk. | AI agents automate routine tasks, enhance decision-making, and improve operational efficiency. In addition, the AI Agent for cybersecurity will allow for a customized approach to tie specific requirement documents with structured data from a database. | Depending on how the final product is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | Depending on how the final product is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-702 | HPC OpenAI-Compatible API | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Provides an LLM API to the entire laboratory free of charge | Other | Not available | Enhances scientific productivity by providing OpenAI-compatible APIs to the laboratory, allowing for rapid development of software and small code modifications for larger model usage. | Foundational endpoint for utilization by other applications | 08/09/2023 | Developed with both contracting and in-house resources | Open Source | Yes | Foundational endpoint for utilization by other applications | No | Yes | Yes | Not available | ||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-703 | Anthropic Claude Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and capabilities | Generative AI | Quick access to general knowledge. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | Anthropic | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-704 | Ask HR | Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on HR content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-706 | AI-Enabled Tech Desk Agent | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Agentic AI | Allow staff to resolve tech issues through an AI-enabled self-service chat agent. | Improves staff support with faster, AI-driven issue resolution. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-707 | mass3 | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI on STEM-optimized LLMs | Productivity Tool | Text | 01/09/2025 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-708 | Microsoft 365 Copilot | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Generative AI | Modern enterprise search among Microsoft 365 content, AI integration into daily business applications, improved management of email and messaging. | Increased efficiency for routine tasks using Microsoft 365 applications | Text and file output in Microsoft 365 applications. Varies based on user prompts. | Text and file output in Microsoft 365 applications. Varies based on user prompts. | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-709 | AI-Form and Questionnaire Assistant | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case does not significantly affect legal, material, or binding rights or critical access to services. It is focused on enhancing the completion of standard forms and questionnaires, and does not involve decisions with legal, material, or binding effect. | Generative AI | To automate and enhance the completion of standard forms and questionnaires used onsite at SRNS. This AI aims to reduce time, effort, and errors, and to enhance context understanding, consistent formatting, and independent outputs from the system. | Reduced development time, reduced errors and rework, better context understanding, and improved standardization and formatting, reducing time spent on forms by half. | The AI system outputs draft forms and questionnaires which need to be reviewed and corrected by users. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The AI system outputs draft forms and questionnaires which need to be reviewed and corrected by users. | |||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-71 | Simulation-based inference for cosmology | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project will develop and use simulation-based inference to estimate cosmological parameters related to cosmic acceleration in the early and late universe — via the cosmic microwave background and strong gravitational lensing, respectively. This | DOE ECA award. Apply SBI to strong lensing and CMB to infer cosmological parameters | Prediction of numerical values of cosmology | No | Prediction of numerical values of cosmology | research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-710 | SQL Server Management Studio | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-711 | Microsoft Exchange Server | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-712 | Collaborative Machine learning platform for Scientific Discovery | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. The result will be a shared platform that lowers the barrier to entry by leveraging the advances in machine learning methods across user facilities, thus empowering domain scientists and data scientists to discover new science using existing and new data with new tools | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. The result will be a shared platform that lowers the barrier to entry by leveraging the advances in machine learning methods across user facilities, thus empowering domain scientists and data scientists to discover new science using existing and new data with new tools | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. The result will be a shared platform that lowers the barrier to entry by leveraging the advances in machine learning methods across user facilities, thus empowering domain scientists and data scientists to discover new science using existing and new data with new tools | ||||||||||||||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-713 | Performance Monitoring at the Salt Waste Processing Facility (SWPF) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | In development stage and impact has not been determined | Classical/Predictive Machine Learning | With data from SWPF instrumentation, train neural networks to consider process parameters and chemical speciation data from incoming salt batches to predict filtration rate performance. | Proactive monitoring of complex facility processes with only select instrumentation data to guide predictions of process operations | Textual output; processing parameters | Textual output; processing parameters | ||||||||||||||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-714 | Advanced Long Term Environmental Monitoring Systems (ALTEMIS) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning | Reduce the cost of long-term monitoring using integrated sensing technologies and AI/ML to forecast groundwater plume migration and anomalies. | Proactive, rather than reactive, monitoring of complex geochemical systems. | Spatiotemporal optimization of sensor locations, correlate proxy variables (e.g., pH, specific conductance, water table elevation, etc.) with contaminants, measure proxy variables with various sensing modalities, predict concentrations across space and time given proxy variables. | 01/09/2022 | Developed in house | Python | Yes | Spatiotemporal optimization of sensor locations, correlate proxy variables (e.g., pH, specific conductance, water table elevation, etc.) with contaminants, measure proxy variables with various sensing modalities, predict concentrations across space and time given proxy variables. | Sensor systems: In Situ (vendor name) well sensors, electrical resistivity tomography system, custom vertically resolved temperature sensors | altemisai.org | No | Yes | |||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-715 | Custom GenAI for ATR Fuel Conversion Project (FuelGPT) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | AI Pilot specific to one project related to HEU to LEU fuel conversion. | Generative AI | Engineers spending extensive time manually iterating through documents during the fuel conversion project. | Engineers save time by using the AI to quickly find answers and relevant source documents. | Engineers save time by using the AI to quickly find answers and relevant source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development | Yes | Engineers save time by using the AI to quickly find answers and relevant source documents. | HEU to LEU fuel conversion project documents | Not available | No | Yes | Not available | |||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-716 | Denoising Diffusion to Accelerate Detector Simulation | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This program aims to develop generative models for quickly simulating showers of particles in calorimeters for LHC experiments | This effort is exploring generative AI to replace costly detector simulation. This would enable faster, more accurate simulation, accelerating and enhancing scientific results and allowing easier use of GPU coprocessors at HPC centers. | The AI system outputs simulated detector hits (energy deposits) in one or more subdetectors of the particle physics experiment. | 25/09/2025 | Developed in house | No | The AI system outputs simulated detector hits (energy deposits) in one or more subdetectors of the particle physics experiment. | research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-717 | Tackling Solid-State Electrochemical Interfaces from Structure to Function Utilizing HPC and Machine Learning Tools | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Applying AI/ML methods to discover better solid state battery interface material using HPC | Applying AI/ML methods to discover better solid state battery interface material using HPC | Applying AI/ML methods to discover better solid state battery interface material using HPC | Applying AI/ML methods to discover better solid state battery interface material using HPC | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-718 | 5G-enabled Reliable and Decentralized IoT Framework with Blockchain | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-719 | ServiceNow AI Search | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Natural Language Processing | Provide a better search experience for finding knowledge articles, tickets, and other data. Ability to pull from sources external to ServiceNow | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers on whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-72 | Extreme data reduction for the edge | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project develops AI algorithms and tools for near-sensor data reduction in custom hardware. | AI tools are developed for embedded inference in real-time processing systems for scientific experiments. This can accelerate scientific discovery and time to science, thus enabling large cost savings and DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-720 | Hub Biography Builder | Pilot – The use case has been deployed in a limited test or pilot capacity. | Human Resources | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Automatically generate draft professional bios from resumes, CVs, and other inputs. | Saves time and improves quality of professional staff bios. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/07/2025 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-721 | Microsoft Skype App | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-722 | Cloud Knowledge Hub | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Answer staff questions on cloud services and best practices via an AI knowledge hub. | Expands staff expertise and accelerates cloud adoption. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-723 | Crickets | Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Classical/Predictive Machine Learning | Facilitate analysis of user intent upon detection of access to potentially inappropriate web content | Reduce analysis cycle and response times | Prediction | 01/10/2024 | Developed in house | Yes | Prediction | None | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-724 | Neptune | Pre-deployment – The use case is in a development or acquisition status. | International Affairs | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Other | Content analysis across domains and structured/unstructured content | Productivity Tool | Text | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-725 | AccessAI | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on corporate content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-726 | Machine Learning to support Operational Efficiency | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | The AI use case focuses on capturing operational metrics and institutional knowledge and does not significantly affect legal, material, or critical access to services rights. | Classical/Predictive Machine Learning | Reducing risk of losing institutional knowledge associated with an aging workforce. Ensure time to proficiency for junior operators is minimized through storing and accessing institutional knowledge to understand how operational efficiency can be improved. | Less unanticipated downtime, improved decision making and improved manufacturing processes with optimized output. | The system will evaluate process output and seek to provide recommendations to realize the desired output, along with the associated logic for why the change will yield the projected output. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The system will evaluate process output and seek to provide recommendations to realize the desired output, along with the associated logic for why the change will yield the projected output. | |||||||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-727 | DNA-P Use Cases Leveraging Artificial Intelligence (Deployed) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of OMB Memorandum M-25-21. | Other | Identify data clusters/trend analysis; identify data discrepancies/data enrichment; generate suggestions (including generating reports, data linkages, and courses of action); generate graphical and natural language analyses | Save time for DOE DNA-P users; improve DOE/NNSA data quality; improve DOE/NNSA safety operations | Data clusters; summaries; recommendations (data linkages, data entries, and courses of action); graphical and natural language analyses; extracted entities; responses to queries via RAG workflows | 01/04/2024 | Developed in house | Palantir | Yes | Data clusters; summaries; recommendations (data linkages, data entries, and courses of action); graphical and natural language analyses; extracted entities; responses to queries via RAG workflows | No custom models developed; AI use cases have been deployed on publicly available information as well as agency-provided data | No | PIA not publicly available | None of the above | No | PIA not publicly available | ||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-728 | Create Statement of Work (SOW) for Procurement | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Ensure procurement SOWs meet standards through guided AI review and alignment checks. | Ensures compliance and quality in procurement processes. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/05/2025 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-729 | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | Natural Language Processing | Faster user support and an additional chain of communication for users to report issues. More organized and consistent information extraction from a message. | Faster user support and an additional chain of communication for users to report issues. More organized and consistent information extraction from a message. | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | 01/09/2025 | Developed in house | No | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | No specific training data; it uses generally available LLM(s) | No | No | Google Gemini | In-Progress | Reduction of admin burden. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | ||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-73 | Machine Learning for Accelerator Operations Using Big Data Analytics / L-CAPE | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | Big data analytics for anomaly prediction and classification, enabling automatic mitigation, operational savings, and predictive maintenance of the Fermilab LINAC | ML models are deployed for FNAL's Linac to detect, label, and act upon faults. The usage of ML will improve our fault labeling and detection. This will allow for improved operational efficiency, fault statistics, and preventive maintenance. To my knowledge this is the first global accelerator operations ML system. | The ML outputs to a dashboard with fault labels and downtime predictions. The model will also try to predict downtime and possible actions. | 25/09/2025 | Developed in house | No | The ML outputs to a dashboard with fault labels and downtime predictions. The model will also try to predict downtime and possible actions. | my own simulated data; research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-730 | AI/ML in Particle Accelerator Controls System | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | ||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-731 | Expanding Consumer Participation in Consumer Electronics Recycling Programs Utilizing Targeted Marketing Campaigns | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Provides advisory information only. AI use will be used once to compile laws and regulations and will not be ongoing use. | Natural Language Processing | Identify laws and regulations related to e-waste | Consolidate and provide a more convenient venue to find laws and regulations related to e-waste disposal and recycling | Text describing laws and regulations on e-waste disposal and recycling and links to support documentation | Text describing laws and regulations on e-waste disposal and recycling and links to support documentation | No | |||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-732 | Chatlab | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used for general-purpose generative AI within a COTS product. | Generative AI | Not available | Not available | Not available | Developed with both contracting and in-house resources | Not available | No | Not available | Not available | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | OREM - ORCC - Oak Ridge EM - Oak Ridge Cleanup Contract (EM) | DOE-734 | CoPilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Not full deployment; testing only in preparation for forced Microsoft rollout in October 2025 | Other | Communication suggestions | Better communication | Provides intelligent suggestions and boosts productivity | 19/08/2025 | Purchased from a vendor | Microsoft | Yes | Provides intelligent suggestions and boosts productivity | No | No | In-Progress | TBS | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | ||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-735 | Microsoft OneNote MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-736 | Identifying Controlling Variables for Mercury Vapor Release at Y-12's Alpha-4 | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Correlate indoor/outdoor meteorological conditions with mercury vapor releases such that PEL exceedances can be forecasted, improving respiratory worker safety and enhancing work planning | Intraday forecast of mercury concentration in buildings given past chronology of meteorological conditions | Prediction of elevated mercury concentrations in the building | Prediction of elevated mercury concentrations in the building | |||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-738 | FY19 Lab Call – Livewire Data Sharing Platform | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Exploring different methods to aid data discovery and providing data information within the platform | Generative AI | Lack of easily accessible, organized information on transportation and mobility-related projects. | Enable time savings and quick answers to queries | GenAI chatbot feature providing info on datasets in the platform or FAQs on how to use the platform | GenAI chatbot feature providing info on datasets in the platform or FAQs on how to use the platform | No | |||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-739 | Microsoft Copilot Studio | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Build enterprise conversational AI agents that securely connect to data and workflows. | Enables secure, scalable automation of enterprise workflows. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Purchased from a vendor | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-74 | Geo Threat Observable for structure cyber threat related to the energy sector | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Not available | Other | Not available | Correlations, recommendations and predictions for improved cyber response. | Collected data are stored in a graph database and used in machine learning to identify threat similarities | Developed with both contracting and in-house resources | Not available | No | Collected data are stored in a graph database and used in machine learning to identify threat similarities | Open-source threat intelligence collected; NLP used to scrape information from cyber incident reports and websites; some data from cyber sensors and threat feeds; and some data from manual threat analysis activities. | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-742 | AI for Financial Analysis | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet criteria. | Classical/Predictive Machine Learning | Productivity enhancement for financial activities | Enhance automation of data processing for financial professionals. | Enhanced financial process workflows | Enhanced financial process workflows | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-744 | Microsoft PowerPoint MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-745 | AI/ML to design and optimize materials and their properties | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-746 | EES&T Communications Impact | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used for general-purpose use cases within the EES&T organization | Other | Not available | Provides actionable insights into communication effectiveness, enabling data-driven improvements in outreach and stakeholder engagement. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-747 | NA-CI Salesforce | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of OMB Memorandum M-25-21. | Other | Solves inefficiency and inconsistency of manual data capture of stakeholder activities. | Improve efficiency, data quality, and visibility. | Synced records of emails, calendar events, and contacts from Outlook into Salesforce. | 16/09/2025 | Purchased from a vendor | Salesforce | Yes | Synced records of emails, calendar events, and contacts from Outlook into Salesforce. | N/A. NA-CI's data is not used for model training or fine-tuning; it is only processed for synchronization within the secure GovCloud Plus environment. | Not Publicly Available | Yes | None of the above | Yes | No open source code | In-Progress | |||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-748 | Microsoft Power BI Desktop | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-749 | Microsoft Azure PowerShell | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-75 | Road Conditions from IBM Watson for INL | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Deemed not generative AI per last year's submission. | Other | Not available | Not available | Not available | Developed with both contracting and in-house resources | Not available | No | Not available | Not available | Not available | No | No | Not available | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-750 | Intelligent Acquisition and Reconstruction for Hyper-Spectral Tomography Systems | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | We will develop artificial intelligence (AI) and machine learning (ML) algorithms to enable dramatic improvements in the throughput and performance of hyperspectral (i.e., multiple energies) computed tomography (HSCT) beamlines at DOE BES Scientific User Facilities (SUFs). We will demonstrate the utility of our algorithms by carefully designing experiments for energy materials at HSCT beamlines available at the Spallation Neutron Source (SNS) and the National Synchrotron Light Source II (NSLS-II). We will also develop AI-driven data acquisition algorithms that will optimize the scanning strategy on the fly, in order to obtain the fewest yet most informative set of measurements (i.e., reducing beam time and/or number of projections in a data set). Our team will provide ML-based reconstruction algorithms that can produce high-quality reconstructions from incomplete, sparse, and low signal-to-noise ratio datasets, enabling real-time feedback and ensuring the best possible reconstruction on completion of the experiment. Finally, our efforts will be available to the user community at both facilities via a general user interface. | We will develop artificial intelligence (AI) and machine learning (ML) algorithms to enable dramatic improvements in the throughput and performance of hyperspectral (i.e., multiple energies) computed tomography (HSCT) beamlines at DOE BES Scientific User Facilities (SUFs). We will demonstrate the utility of our algorithms by carefully designing experiments for energy materials at HSCT beamlines available at the Spallation Neutron Source (SNS) and the National Synchrotron Light Source II (NSLS-II). We will also develop AI-driven data acquisition algorithms that will optimize the scanning strategy on the fly, in order to obtain the fewest yet most informative set of measurements (i.e., reducing beam time and/or number of projections in a data set). Our team will provide ML-based reconstruction algorithms that can produce high-quality reconstructions from incomplete, sparse, and low signal-to-noise ratio datasets, enabling real-time feedback and ensuring the best possible reconstruction on completion of the experiment. Finally, our efforts will be available to the user community at both facilities via a general user interface. | ML-based reconstruction algorithms and AI-driven data acquisition algorithms for HSCT beamlines, available to the user community at both facilities via a general user interface. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-76 | Deep Learning Malware Analysis for reusable cyber defenses. | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Not available | Other | Not available | Identify commonalities in malware | Deep Learning Malware Analysis for reusable cyber defenses. | Developed with both contracting and in-house resources | Not available | No | Deep Learning Malware Analysis for reusable cyber defenses. | Data for malware binaries come mainly from collected open-source malware repositories; the @DisCo application disassembles binaries and stores them in a graph database for management and vector-embedded queries to identify common malware functions useful for cyber defenses. | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-93 | To drive insights on the power system reliability, cost, and operations during the energy transition with and without FECM technologies | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | To drive insights on the power system reliability, cost, and operations during the energy transition with and without FECM technologies | Generate predictive scenarios | Predictive scenarios | Predictive scenarios | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-94 | To drive insights on the dependencies between the natural gas and electricity sectors to increase reliability of the NG system | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | To drive insights on the dependencies between the natural gas and electricity sectors to increase reliability of the NG system | Generate predictive scenarios | Predictive scenarios | Predictive scenarios | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-98 | Data platform to expedite access and reuse of carbon ore data for materials, manufacturing and research | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Data platform to expedite access and reuse of carbon ore data for materials, manufacturing and research | Expedite access and reuse of carbon ore data | Data | Data | ||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Design Your Facility | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Discover Financial Business Intelligence System (FBIS) Report Analysis (Sub-CAN Line Items) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | c) Not high-impact | Not high-impact | Agentic AI | How can spend plans be more real-time and less manual to create and keep up to date? The data used to create and monitor spend plans are spread across multiple out-of-the-box reports available in FBIS. Combing through them is time-intensive, making it more difficult to keep spend plans up to date in real-time. This task is especially challenging due to the unstandardized nature of sub-CAN (Congressional Appropriation Number) line item descriptions. | More efficient spend plan creation and monitoring. This AI-enabled tool provides a resource for ACF budget managers to identify related line items across disparate reports in FBIS. | Suggested budget line items related to a user-provided description. AI is used to both search for relevant data (CANs, categories, and sub-CAN line item descriptions) and aggregate information (e.g. supplier name, document number, total obligations, and user-provided projected costs) to create an up-to-date, real-time spend plan. | 25/04/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Suggested budget line items related to a user-provided description. AI is used to both search for relevant data (CANs, categories, and sub-CAN line item descriptions) and aggregate information (e.g. supplier name, document number, total obligations, and user-provided projected costs) to create an up-to-date, real-time spend plan. | RAG implementation using commercially-available LLMs and data from FBIS | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Grant Spend Health Analysis | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Structuring and Validating Completeness of Case Data Information | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The use of AI is narrowly focused on extracting key data points from scanned notices. The outputs do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the 6 cases outlined in M-25-21, page 19. | Agentic AI | How can referrals from the Department of Homeland Security (DHS) be more quickly reviewed for critical pieces of information about separated children's parents and the reason for separation? Since November 2024, when a minor is separated from their parent or legal guardian, U.S. Customs and Border Protection (CBP) within DHS is required to send certain pieces of information about the parent/legal guardian in accordance with the Ms. L v. ICE settlement. CBP sends this information in a block of free text and sometimes does not include the required information. Historically, ACF's Office of Refugee Resettlement's (ORR) tracking of required information has been done manually and inconsistently. | More easily searchable and accurate data on family separations. Quicker validation of CBP compliance in essential data sharing for separated families, enabling faster follow-up as needed to receive any missing data points. | Structured data asset of the critical data points needed about a separation case. AI is used to conduct initial parsing of the data provided by CBP and highlight whether or not the required fields from the Ms. L v. ICE settlement are included and can therefore be updated into the child's profile in ORR's data system. ORR's Intakes Team does final review, and in cases where data appears to be missing, the ORR Intakes Team reaches back out to CBP for that information. | 24/12/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Structured data asset of the critical data points needed about a separation case. AI is used to conduct initial parsing of the data provided by CBP and highlight whether or not the required fields from the Ms. L v. ICE settlement are included and can therefore be updated into the child's profile in ORR's data system. ORR's Intakes Team does final review, and in cases where data appears to be missing, the ORR Intakes Team reaches back out to CBP for that information. | No training or fine-tuning; we are using secure commercially available LLMs. | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action |||||||||||
| Department Of Health And Human Services | HHS/ACF | Structuring Notice of Concern Data | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The use of AI is narrowly focused on extracting key data points from unstructured narratives and validating data completeness. The outputs do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the 6 cases outlined in M-25-21, page 19. | Computer Vision | How can the Office of Refugee Resettlement (ORR) clear its backlog of notices of concern (NOC) and minimize backlog in the future? Notice of Concern (NOC) forms contain critical information regarding the safety of children who have left ORR's care. Some forms are received as scans, with the information not in machine-readable format. ORR receives hundreds of NOCs a day. Due to a personnel shortage in the Prevention of Child Abuse and Neglect (PCAN) Team responsible for reviewing and acting on NOCs, as of October 2024 there was a backlog of over 30,000 NOCs. | More effective and efficient review of NOCs. With AI-enabled structuring of data in NOCs received in scanned formats, ORR can reduce the large backlog that has accumulated. | Structured data parsed from the subset of NOCs that are scans of documents. AI is not used to triage NOCs, just to parse information from scanned documents. The parsed information is presented to the PCAN team alongside the original document for review and action. | 24/12/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Structured data parsed from the subset of NOCs that are scans of documents. AI is not used to triage NOCs, just to parse information from scanned documents. The parsed information is presented to the PCAN team alongside the original document for review and action. | No training or fine-tuning; we are using secure commercially available LLMs. | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action |||||||||||
| Department Of Health And Human Services | HHS/ACF | Unaccompanied Children Program Policy & Procedure Research Tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Agentic AI | How can the Office of Refugee Resettlement (ORR) research the laws, standards, policies, and procedures applicable to monitoring visits more quickly while maintaining thoroughness? ORR conducts monitoring visits at least monthly to ensure that care providers meet minimum standards for the care and timely release of unaccompanied children, and that they abide by all Federal and State laws and regulations, licensing and accreditation standards, ORR policies and procedures, and child welfare standards. If ORR monitoring finds a care provider to be out of compliance with requirements, ORR issues corrective action findings and requires the care provider to resolve the issue within a specified time frame. Compliance determination involves research into the various laws, standards, policies, and procedures. | Faster issuance of well-informed corrective action findings. The goal of the UC Program Policy & Procedure Research Tool is to speed up this process, as children's health and well-being may be impacted before a corrective action finding is issued and the issue is resolved. The tool speeds up research of relevant laws, standards, policies, and procedures, using content curated and approved by ORR's policy team. This research is one part of the process that informs ORR's monitoring team's decisions on whether corrective actions are needed and, if so, what corrective actions. | Initial assessment of whether a care provider is in compliance with the laws, standards, policies, and procedures applicable to the care of unaccompanied children, with an explanation of evidence pulled from monitoring visit reports and the policy documents, and accurate citations. AI is not used to suggest corrective actions but rather to support determination of whether care providers are in compliance. | 25/07/2026 | c) Developed with both contracting and in-house resources | MIT Lincoln Labs | Yes | Initial assessment of whether a care provider is in compliance with the laws, standards, policies, and procedures applicable to the care of unaccompanied children, with an explanation of evidence pulled from monitoring visit reports and the policy documents, and accurate citations. AI is not used to suggest corrective actions but rather to support determination of whether care providers are in compliance. | RAG implementation using commercially-available LLMs and a curated dataset of the laws, standards, policies, and procedures applicable to the care of unaccompanied children | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Unaccompanied Children Process Model Digital Twins | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Analyzing Public Comments on Proposed Rule | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Outreach List Segmentation | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | https://www.hhs.gov/about/agencies/asa/ohr/hr-library/index.html | Ask HR Policy | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF better support Executive Officers and Administrative Officers in finding relevant Human Resources (HR) policy? HHS has over 70 HR policies that ACF Executive Officers/Administrative Officers must navigate when trying to find an answer to a question. | Faster fact-finding on HHS's HR policies. Rather than clicking through multiple policies to try to identify the ones with information relevant to their question, Executive Officers/Administrative Officers can ask a question in natural language. | Suggested answer to an HR-related question, with thought process and links to the relevant section of official document(s). Ask HR Policy provides a secure interface permissioned only to select ACF Executive Officers and Administrative Officers. Users type in questions about managing employees covered by the HR Policy Library, and Ask HR Policy provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. | 24/05/2026 | a) Purchased from a vendor | Palantir | Yes | Suggested answer to an HR-related question, with thought process and links to the relevant section of official document(s). Ask HR Policy provides a secure interface permissioned only to select ACF Executive Officers and Administrative Officers. Users type in questions about managing employees covered by the HR Policy Library, and Ask HR Policy provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. | RAG implementation using commercially-available LLMs and HHS's Official HR Policy Library | https://www.hhs.gov/about/agencies/asa/ohr/hr-library/index.html | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action ||||||||||
| Department Of Health And Human Services | HHS/ACF | Child Welfare Information Automated Inquiry System (Note: previously named "Child Welfare Information Gateway OneReach Application") | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | How can Child Welfare Information Hotline callers get the right information faster without increasing staffing? The Children's Bureau runs the Child Welfare Information Gateway, a connection to trusted resources on the child welfare continuum. The Information Gateway has a hotline for answering questions or requesting information: https://www.childwelfare.gov/stay-connected/contact/. Callers to the hotline range from those having more routine questions (such as asking for the contact information for their state's child welfare agency) to reporting more complex, nuanced situations. | Approximately a quarter of inquiries to the Child Welfare Information Hotline are assisted by AI, freeing up time for staff to focus on more complex, nuanced cases. In the first 4 years, this amounts to ~2,500 inquiries assisted by AI. | The Information Gateway Hotline connects to a phone interactive voice response (IVR) system. The Information Gateway Hotline maintains a database of state hotlines for reporting child abuse and neglect that it can connect a caller to based on their inbound phone area code. Additionally, the Information Gateway Hotline offers a limited FAQ texting service that utilizes natural language processing to answer user queries. | 20/03/2026 | a) Purchased from a vendor | Amazon Connect (current); OneReach (previous, deprecated) | No | The Information Gateway Hotline connects to a phone interactive voice response (IVR) system. The Information Gateway Hotline maintains a database of state hotlines for reporting child abuse and neglect that it can connect a caller to based on their inbound phone area code. Additionally, the Information Gateway Hotline offers a limited FAQ texting service that utilizes natural language processing to answer user queries. | User queries are used for reinforcement training by a human AI trainer and to develop additional FAQs. | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Collective Bargaining Compass | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can the HHS-NTEU collective bargaining agreement be more easily referenced? The HHS NTEU Collective Bargaining Agreement and associated rules are over 400 pages long, covering numerous topics related to employer-labor relations. | Faster fact-finding on the HHS-NTEU collective bargaining agreement. Rather than searching for relevant passages through keyword matching, people can ask their questions in natural language. | Suggested answer to a question, with thought process and links to the relevant section of official document(s). The Collective Bargaining Compass provides a secure Virtual Assistant interface permissioned only to select ACF managers. Users type in questions about managing employees covered by the Collective Bargaining Agreement, and the Virtual Assistant provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. | 24/02/2026 | a) Purchased from a vendor | Palantir | Yes | Suggested answer to a question, with thought process and links to the relevant section of official document(s). The Collective Bargaining Compass provides a secure Virtual Assistant interface permissioned only to select ACF managers. Users type in questions about managing employees covered by the Collective Bargaining Agreement, and the Virtual Assistant provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. | RAG implementation using commercially-available LLMs and the latest HHS/NTEU collective bargaining agreement, plus any relevant procedures and follow-on memoranda. | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Discover User Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF Discover users better understand how to use the platform? ACF Discover is a relatively new platform designed to streamline various analyses and data maintenance responsibilities of ACF Executive Officers and other ACF administrative and management staff. When launched, ACF Discover users were trained with instructional documentation. However, some users still find it difficult to understand the different software modules and how to use them. | Easier navigation of the ACF Discover Staff Management platform. Rather than searching for relevant passages through keyword matching, people can ask their questions in natural language. | Suggested answer to a question, with thought process and links to the relevant section of official document(s). The User Documentation Assistant provides a secure virtual assistant interface that is only available to ACF Discover Users. Users are able to ask the assistant specific questions about the capabilities of ACF Discover along with how to leverage tools and applications. The Virtual Assistant is able to provide answers by referencing the User Reference guide. | 24/01/2026 | a) Purchased from a vendor | Palantir | Yes | Suggested answer to a question, with thought process and links to the relevant section of official document(s). The User Documentation Assistant provides a secure virtual assistant interface that is only available to ACF Discover Users. Users are able to ask the assistant specific questions about the capabilities of ACF Discover along with how to leverage tools and applications. The Virtual Assistant is able to provide answers by referencing the User Reference guide. | RAG implementation using commercially-available LLMs and the latest ACF Discover user reference guide. | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Policy Knowledge Base Data Migration | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Qualitative Analysis | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | How can thematic coding and trend analysis across qualitative data be done more efficiently? ACF staff often conduct surveys and interviews, which generate qualitative data that needs to be analyzed for themes and trends. The standard approach involves multiple human passes of labeling the data for analysis, which is very time-intensive. | Faster initial labeling of qualitative data that human reviewers are then able to correct and iterate from | ACF employees have several tools available to them to support qualitative analysis. Typically the tools are asked to assist with one of the following scenarios: - Take a user-provided list of topics and text passages to initially categorize passages by topic(s) - Suggest potential categories for organizing text passages - Identify thematic trends across a corpus of narrative data - Conduct sentiment analysis | 23/03/2026 | a) Purchased from a vendor | Lumivero, Qualtrics, Credal, Ask Sage | Yes | ACF employees have several tools available to them to support qualitative analysis. Typically the tools are asked to assist with one of the following scenarios: - Take a user-provided list of topics and text passages to initially categorize passages by topic(s) - Suggest potential categories for organizing text passages - Identify thematic trends across a corpus of narrative data - Conduct sentiment analysis | RAG implementation using commercially-available LLMs and user-provided narrative data | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Intakes Referral Parser | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Training and Technical Assistance (TTA) GenAI Chatbot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Funding Opportunity Redundancy Analysis | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Unaccompanied Child Sponsor Identity Verification | a) Pre-deployment The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Computer Vision | How can ACF strengthen sponsor identity verification to reduce fraudulent sponsor applications? Throughout the process of sponsor vetting, there are various touchpoints where the identity of the sponsor is critical to ensuring a child will be placed with a safe guardian. Knowing that the person (sponsor, household adult) is who they claim to be and that the person presenting at different points of the sponsor application process is consistently the person who was vetted is essential to ensure the welfare of a child. | Increased certainty that the adults applying to sponsor/care for a child released from ORR care are who they claim to be, so that they may be properly vetted in providing a safe environment for children post-release. | Confirmation that person A is person A at all touchpoints of the sponsor vetting process. | Confirmation that person A is person A at all touchpoints of the sponsor vetting process. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Head Start Correspondence Categorizer | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | How can regional management and program specialists with large grant workloads keep track of all the correspondence being received by staff in their regional offices? The Office of Head Start's (OHS) program specialists receive correspondence through the Head Start Enterprise System (HSES) for a variety of topics. Many of the requests, questions, and reports are tracked to completion in another system that has more robust alerting and workflow management capabilities. OHS is automating the data transfer between these two systems, introducing a data processing step that helps categorize correspondence. | More efficient tracking of correspondence. The AI categorization of correspondence can help managers and program specialists more quickly identify correspondence that requires more immediate action. | #NAME? | #NAME? | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Builder Buddy | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Agentic AI | How can ACF staff more readily develop their own tailored virtual assistants on our enterprise genAI platform? ACF's enterprise generative AI platform gives users the tools to create their own tailored virtual "agents" to assist with more specialized tasks specific to a user's work. Users do not need to code to build these virtual assistants but do have to provide adequate context and instructions. Builder Buddy lets users draft a virtual assistant through describing their needs and providing context in a natural conversational manner as an alternative to a form-based builder interface. | More ACF staff feel empowered and equipped to configure their own tailored virtual assistants, increasing the usefulness of LLMs beyond basic chat. Tailored virtual assistants allow ACF staff to leverage LLMs in repeated workflows, reducing administrative burden and allowing staff to focus on higher-order analysis. | Draft virtual assistant configurations that users then further test and iterate on before deployment. | 25/05/2026 | a) Purchased from a vendor | Credal | Yes | Draft virtual assistant configurations that users then further test and iterate on before deployment. | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | |||||||||||||
| Department Of Health And Human Services | HHS/ACF | Document Review for Alignment with Executive Orders: Position Descriptions | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF identify position descriptions that may need to be adjusted for alignment with recent executive orders? ACF needed to conduct an audit across position descriptions in accordance with HHS Secretarial Directives related to recent executive orders such as Executive Order 14151 "Ending Radical and Wasteful Government DEI Programs and Preferencing" and Executive Order 14168 "Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government". | Increased efficiency of review, with reduced administrative burden on staff. Staff were able to focus time more effectively by using AI to support the flagging of potentially affected position descriptions. | Initial list of position descriptions for further review, validation, and adjustments as applicable by ACF's team. AI was not used to make any final determinations. It was leveraged to more effectively identify position descriptions that may require revision. | 25/03/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Initial list of position descriptions for further review, validation, and adjustments as applicable by ACF's team. AI was not used to make any final determinations. It was leveraged to more effectively identify position descriptions that may require revision. | RAG implementation using commercially-available LLMs and user-provided position descriptions | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Document Review for Alignment with Executive Orders: Grant Materials | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF identify grants that may need to be reviewed for alignment with recent executive orders? ACF needed to conduct an audit across existing grants and new grant applications in accordance with HHS Secretarial Directives related to recent executive orders such as Executive Order 14151 "Ending Radical and Wasteful Government DEI Programs and Preferencing" and Executive Order 14168 "Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government". | Increased efficiency of review, with reduced administrative burden on staff. To increase the efficiency of the executive order alignment review, ACF is leveraging an AI-based process that reviews application submission files and generates initial flags and priorities for discussion, which are then routed to ACF Program Office staff for final review, justification, and recommendation. | List of grants for program staff to review, with an initial assessment of compliance against executive orders and example passages from the grant materials for flagged grants. Staff are only able to view grants associated with their program office. In addition to the short summary of the results from our AI processing, staff are presented with links to associated grant files to reference while doing their review and making grant compliance assessments. | 25/03/2026 | c) Developed with both contracting and in-house resources | Palantir, Credal | Yes | List of grants for program staff to review, with an initial assessment of compliance against executive orders and example passages from the grant materials for flagged grants. 
Staff are only able to view grants associated with their program office. In addition to the short summary of the results from our AI processing, staff are presented with links to associated grant files to reference while doing their review and making grant compliance assessments. | RAG implementation using commercially-available LLMs and user-provided grant materials | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Grant management support: structuring information in applications | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Agentic AI | How can ACF staff more efficiently review grant applications that have non-standardized formats? Grant applications for ACF-funded programs can come in different formats, even when a standard set of questions or template is provided. ACF staff evaluating applications look for explanation and details to assess against pre-established evaluation criteria. Going back-and-forth across an application to find the relevant explanation can be time intensive, especially for applications that include multiple documents spanning 50+ pages. | Increased efficiency of review so that more time can be spent on grant application analysis and evaluation | Varies, depending on the program office. Outputs generally involve summarizing information, extracting key information into a specific format, flagging potential gaps or inconsistencies, and providing citations / page numbers to support follow-up review and validation. In all cases, AI is only used to support review of grant applications but does not make any final determinations for awards. | 25/07/2026 | c) Developed with both contracting and in-house resources | Palantir, Credal | Yes | Varies, depending on the program office. Outputs generally involve summarizing information, extracting key information into a specific format, flagging potential gaps or inconsistencies, and providing citations / page numbers to support follow-up review and validation. In all cases, AI is only used to support review of grant applications but does not make any final determinations for awards. 
| RAG implementation using commercially-available LLMs and user-provided grant applications | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Acquisition support: co-drafting acquisition packages | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF acquisition teams more efficiently draft acquisition packages? In addition to tailored performance work statements (PWS), acquisition packages include multiple documents that often require information based on the PWS. There are also instances where a recompete is issued that largely follows a previous contract, with some updates to volumes. Different contract awarding agencies have different formats. ACF's acquisition teams therefore commonly need to repackage information. | Increased efficiency in preparing acquisition packages so that more time is spent on the substance of scoping contracts and less time on rote drafting | Draft language for various parts of an acquisition package based on user-provided context and direction. For instance, based on a provided set of task narratives, a user may ask a large language model to draft the table of deliverables. Based on a draft set of requirements, a user may ask the large language model to provide an initial suggestion for organizing tasks. Based on a copy of a previous modification memo and an executed contract, a user may ask a large language model to draft a new modification memo to exercise the next option year. | 24/12/2026 | c) Developed with both contracting and in-house resources | Credal, Ask Sage, Microsoft | Yes | Draft language for various parts of an acquisition package based on user-provided context and direction. For instance, based on a provided set of task narratives, a user may ask a large language model to draft the table of deliverables. 
Based on a draft set of requirements, a user may ask the large language model to provide an initial suggestion for organizing tasks. Based on a copy of a previous modification memo and an executed contract, a user may ask a large language model to draft a new modification memo to exercise the next option year. | RAG implementation using commercially-available LLMs and user-provided context on acquisition needs | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Acquisition support: assisting reviews and co-drafting technical evaluation documents | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF teams more efficiently review contract proposals and summarize technical evaluation discussions? In response to Requests for Information and Requests for Proposals, ACF teams receive responses from interested vendors. In the review process, ACF staff need to provide summarized comments for each response on potential suitability for delivering the work. These summaries are based on individual and group review against pre-established criteria. When there is a high volume of vendor responses, review teams have many summaries to write. | Increased efficiency in drafting technical evaluation documents, so that more time is spent on review and analysis and less time is spent on "blank screen syndrome" | Draft language for technical evaluation documents, based on user-provided context, direction, analysis, and examples. For example, the user may provide a statement on why they assess a proposal to be unsuitable based on the evaluation criteria, and then leverage the AI tool to draft language to pull and format specific examples with page citations from the proposal. AI is only used to draft language and make it easier to find relevant passages in proposal materials. AI is not used to make final determinations. The technical evaluators review and revise as needed all AI-drafted language, verifying accuracy of any cited excerpts. | 25/07/2026 | c) Developed with both contracting and in-house resources | Credal, Ask Sage, Microsoft | Yes | Draft language for technical evaluation documents, based on user-provided context, direction, analysis, and examples. 
For example, the user may provide a statement on why they assess a proposal to be unsuitable based on the evaluation criteria, and then leverage the AI tool to draft language to pull and format specific examples with page citations from the proposal. AI is only used to draft language and make it easier to find relevant passages in proposal materials. AI is not used to make final determinations. The technical evaluators review and revise as needed all AI-drafted language, verifying accuracy of any cited excerpts. | RAG implementation using commercially-available LLMs and received capability statements | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | AHRQ Search | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Organization-wide search that includes Relevancy Tailoring, Auto-generated Synonyms, Automated Suggestions, Suggested Related Content, Auto Tagging, and "Did you mean?" to allow visitors to find specific content. This AI use case enhances our agency's efficiency and user experience by optimizing search results, auto-completing queries, suggesting relevant searches and content tags, as well as proposing spelling corrections. | This AI use case aims to optimize search results by adjusting their ranking, ensuring that the most pertinent information is displayed at the top. It also enhances search effectiveness by adding synonyms to queries behind the scenes. It further improves user experience by auto-completing queries as they are being typed, and showing related searches that might offer additional valuable insights. The system also proposes content tags automatically, leveraging machine learning to assess existing content tagging patterns. Additionally, it suggests spelling corrections and reformats search queries based on data from Google Analytics. | This AI use case aims to optimize search results by adjusting their ranking, ensuring that the most pertinent information is displayed at the top. It also enhances search effectiveness by adding synonyms to queries behind the scenes. It further improves user experience by auto-completing queries as they are being typed, and showing related searches that might offer additional valuable insights. The system also proposes content tags automatically, leveraging machine learning to assess existing content tagging patterns. 
Additionally, it suggests spelling corrections and reformats search queries based on data from Google Analytics. | 19/09/2026 | c) Developed with both contracting and in-house resources | RIVA Solutions | Yes | This AI use case aims to optimize search results by adjusting their ranking, ensuring that the most pertinent information is displayed at the top. It also enhances search effectiveness by adding synonyms to queries behind the scenes. It further improves user experience by auto-completing queries as they are being typed, and showing related searches that might offer additional valuable insights. The system also proposes content tags automatically, leveraging machine learning to assess existing content tagging patterns. Additionally, it suggests spelling corrections and reformats search queries based on data from Google Analytics. | Website Data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | Chatbot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | Chatbot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/OEREP | Enhancing Diversity in Peer Review - Pilot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | NLQuery- As Data or Pulse | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | Quality and Safety Review System AI-enabled automated abstraction | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CFACT | AI DevOps - Improving Development and CI/CD Operations | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Introducing AI to DevOps can help identify and reduce errors, shorten release cycles, and empower development teams with data-driven insights, resulting in faster continuous integration and shorter development lifecycles. | Integrating AI into the DevOps pipeline can boost efficiency, enhance code quality, and accelerate the development cycle. AI code review uses artificial intelligence algorithms to analyze source code for potential issues. Initial integration of AI code review can assist in detecting bugs, security vulnerabilities, performance bottlenecks, and deviations from coding standards. | Integrating AI into the DevOps pipeline can boost efficiency, enhance code quality, and accelerate the development cycle. AI code review uses artificial intelligence algorithms to analyze source code for potential issues. Initial integration of AI code review can assist in detecting bugs, security vulnerabilities, performance bottlenecks, and deviations from coding standards. | Pingwind | Integrating AI into the DevOps pipeline can boost efficiency, enhance code quality, and accelerate the development cycle. AI code review uses artificial intelligence algorithms to analyze source code for potential issues. Initial integration of AI code review can assist in detecting bugs, security vulnerabilities, performance bottlenecks, and deviations from coding standards. | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CEPI | USPSTF Public Forms Data AI Integration | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASPR/ODAIA | emPOWER AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Public health authorities, first responders and others across the emergency management spectrum indicated that they needed to be able to access HHS emPOWER Map publicly available data more rapidly, particularly in unstable internet conditions, during disasters. emPOWER AI, an Amazon Alexa Skill, was created to allow anyone with a smartphone, particularly emergency and first responders, to be able to request the data from the HHS emPOWER Map and receive it within seconds from the field to headquarters. | Public health authorities, first responders and others across the emergency management spectrum indicated that they needed to be able to access HHS emPOWER Map publicly available data more rapidly, particularly in unstable internet conditions, during disasters. emPOWER AI, an Amazon Alexa Skill, was created to allow anyone with a smartphone, particularly emergency and first responders, to be able to request the data from the HHS emPOWER Map and receive it within seconds from the field to headquarters. For example, a local first responder in the field at the location of a disaster could rapidly identify the total number of Medicare beneficiaries who live independently in a given ZIP Code and may be adversely impacted by a rapidly progressing flood or wildfire emergency, and use this information to inform decision-making on evacuation assistance resources and teams. | emPOWER AI gives the user publicly available data from the HHS emPOWER Map on the number of electricity-dependent Medicare beneficiaries at the national, state, territory, county, and ZIP Code levels. 
| 19/12/2026 | c) Developed with both contracting and in-house resources | Communications Training & Analysis Corporation (CTAC) | Yes | emPOWER AI gives the user publicly available data from the HHS emPOWER Map on the number of electricity-dependent Medicare beneficiaries at the national, state, territory, county, and ZIP Code levels. | Publicly available de-identified data on the HHS emPOWER Map | No | Publicly available data, PIA is under the HHS emPOWER Map ATO. | k) None of the above | No | No | Publicly available data, PIA is under the HHS emPOWER Map ATO. | |||||||||||
| Department Of Health And Human Services | HHS/ASPR/CP | Senior Leadership Briefing Generation | a) Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The manual process of parsing and understanding documents, and summarizing their content | Saves the SLB team time manually typing out a briefing | Automatically generates a Senior Leadership Briefing based on the user-inputted requirements and documents | Automatically generates a Senior Leadership Briefing based on the user-inputted requirements and documents | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASPR/CP | AIP Cyber Incident Ingestion | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The manual process of parsing out cyber incidents and entering them into the system | Allows the Cyber team to quickly ingest cyber incident data | Automatically parses and creates cyber incidents based on the user-inputted description or email | 25/11/2026 | b) Developed in-house | Palantir | Yes | Automatically parses and creates cyber incidents based on the user-inputted description or email | ASPR Cyber Incident Descriptions | No | N/A | k) None of the above | Yes | N/A | ||||||||||||
| Department Of Health And Human Services | HHS/ASPR/CP | ASPR TRACIE - Web search results improvement (prototyping stage) | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Web search results improvement - ASPR TRACIE | Public users searching on the website can expect to see more relevant and improved search results. | The output will bring more relevant search results by improving current search results. It will use both keywords and natural language to bring more relevant results. | The output will bring more relevant search results by improving current search results. It will use both keywords and natural language to bring more relevant results. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OA | PRISM Ally | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI | Assists users with federal regulatory and agency policy questions related to acquisition | AI will help HHS deliver faster, higher-quality public services and measurably improve mission outcomes by cutting cycle times and backlogs, boosting accuracy, and increasing first-contact resolution and customer satisfaction. It will drive cost avoidance and productivity through automation and reuse of shared data/models and code, lowering unit costs per transaction while protecting taxpayer dollars. Built-in accessibility, interpretability, and human-in-the-loop safeguards strengthen equity, fairness, and public trust, with transparent citations, monitoring, and appeal mechanisms. The workforce benefits from targeted upskilling and copilots that reduce manual research and documentation, improving time-to-competency and decision quality. Data quality and interoperability improve via standardized metadata, provenance, and sharing, enabling secure, portable, and interoperable solutions that reduce vendor lock-in and long-term risk. Success will be tracked with concrete metrics such as cycle-time reduction, error-rate and rework decreases, customer experience score gains, cost-per-action savings, accessibility conformance, reuse/adoption counts, training completions, and compliance/incident rates. | Ally utilizes a Retrieval-Augmented Generation (RAG) approach to develop answers to user-submitted questions. The user submits a query and any prompt instruction needed through the PRISM Ally user interface. Using the query, PRISM Ally performs a vector search of its private knowledge repository to identify relevant information that can provide enhanced context for developing the answer. 
The user query, prompt, and enhanced context are then passed to the LLM. The LLM considers the information and returns an answer. The utilization of enhanced context provides guardrails for the LLM and helps to increase the accuracy of the answers provided. | 25/06/2026 | a) Purchased from a vendor | Unison | No | Ally utilizes a Retrieval-Augmented Generation (RAG) approach to develop answers to user submitted questions. The user submits a query and any prompt instruction needed through the PRISM Ally user interface. Using the query, PRISM Ally performs a vector search of its private knowledge repository to identify relevant information that can provide enhanced context for developing the answer. The user query, prompt, and enhanced context are then passed to the LLM. The LLM considers the information and returns an answer. The utilization of enhanced context provides guardrails for the LLM and helps to increase the accuracy of the answers provided. | The PRISM Ally application and private knowledge repository are located within the Unison Cloud, in a FedRAMP moderate environment. The application and repository are maintained by Unison. Regulatory content (e.g. FAR, DFARS, agency supplementals) within the repository is sourced from government authenticated sources (e.g. acquisition.gov, ecfr.gov). Unison updates the regulatory content within the repository with each new regulatory update release. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/ASTP | Certification and Testing/Program Administration AI-enabled Internal Processes | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | ONC's certification and related program operations rely on many manual, text-heavy, and repetitive tasks (e.g., analyzing surveillance reports, validating certification test results, generating public meeting materials, preparing communications, managing Jira tickets, summarizing standards/IG guidance, and drafting acquisition documents). These activities are time-consuming, error-prone, and difficult to scale as program workload increases. The AI use case is intended to automate or semi-automate routine document drafting, data summarization, basic analysis, and information retrieval across these functions so staff can focus on higher-value review, oversight, and decision-making. | Expected benefits include: (1) improved efficiency of internal certification and program operations (e.g., faster preparation of surveillance analyses, CHPL artifacts, and meeting materials); (2) reduced risk of omissions and inconsistencies in internal documents through standardized AI-assisted drafting and terminology checks; (3) quicker access to relevant information from CHPL data, Jira tickets, standards implementation guides, and financial spreadsheets; and (4) more timely, clear, and consistent public-facing communications and policy/support documents. Indirectly, these improvements support ONC's mission to advance safe, interoperable health IT by improving the quality and timeliness of its certification, oversight, and communication activities. | AI-generated or AI-assisted outputs include: (1) draft analytical summaries and reports (e.g., surveillance reporting analysis, RWT results validation, SED categorization, data visualizations); (2) draft public-facing and stakeholder communications (e.g., webinar Q&As, plain-language explanations of regulatory or standards text, communication templates); (3) internal operational artifacts (e.g., CHPL backups, release notes, Jira responses and ticket summaries, Excel query results); and (4) first drafts of acquisition and planning documents (e.g., statements of work, market research, memoranda of need, acquisition plans). All outputs are reviewed, edited, and approved by ONC staff before use. | AI-generated or AI-assisted outputs include: (1) draft analytical summaries and reports (e.g., surveillance reporting analysis, RWT results validation, SED categorization, data visualizations); (2) draft public-facing and stakeholder communications (e.g., webinar Q&As, plain-language explanations of regulatory or standards text, communication templates); (3) internal operational artifacts (e.g., CHPL backups, release notes, Jira responses and ticket summaries, Excel query results); and (4) first drafts of acquisition and planning documents (e.g., statements of work, market research, memoranda of need, acquisition plans). All outputs are reviewed, edited, and approved by ONC staff before use. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASTP | The NEF DSI/AI project addresses local validation of AI-based clinical decision support (CDS)/decision support interventions (DSI) in provider settings. | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Other | Assess quality of AI-based clinical decision support (CDS) tools | [LAVA, or "Local AI Evaluator", does not itself use AI.] LAVA would assist clinicians in assessing the accuracy, and therefore usefulness, of AI diagnosis tools. AI diagnosis tools are developed to apply to a national population, rather than to smaller, local populations, such as those served by small providers with one or few physical locations. These smaller, local patient populations may have different demographics than those on which the AI-based tool was trained, so the LAVA tool can help illuminate these differences and how the AI tool may apply to the local population. This can help providers learn how to best use their AI diagnosis tools. | Outputs are not generated by AI, but rather use open source information to assess outputs from other AI-based tools. This tool's outputs are metrics that measure, for example, accuracy and precision of external AI predictions of disease onsets in local patient populations. | Outputs are not generated by AI, but rather use open source information to assess outputs from other AI-based tools. This tool's outputs are metrics that measure, for example, accuracy and precision of external AI predictions of disease onsets in local patient populations. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | HaMLET: Harnessing Machine Learning to Eliminate Tuberculosis | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Adverse Childhood Experiences (ACEs) Literature Review Dashboard | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automated Analysis of Injury Control Research Center (ICRC) Annual Progress Reports (APRs) using Large Language Models | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is designed to streamline the review process of Annual Progress Reports (APRs) submitted by Injury Control Research Centers (ICRCs), improve efficiency, and support the evaluation of the performance and progress of ICRC-funded activities. | The AI will help quickly and efficiently identify key challenges and insights from ICRC APRs, enabling more effective decision-making in the review process. By automating the extraction and analysis of critical information, the AI allows the ICRC team to focus on higher-level evaluation and strategic planning. This will reduce the time and resources needed for manual review, improve the consistency and accuracy of assessments, and facilitate faster responses to ICRC needs. Ultimately, this will support ICRCs in overcoming challenges and achieving their research and injury control goals, benefiting the public health system as a whole. | The AI analyzes the textual content of APRs, focusing initially on sections detailing the challenges faced by ICRCs. It identifies key themes, trends, and critical information that may require further attention. The AI methodology extracts insights and patterns from the data, which can then be compared with manual qualitative analysis outcomes. In subsequent stages, the AI will be expanded to analyze other sections of the APRs, such as progress toward goals and program impact. | 23/08/2026 | b) Developed in-house | Yes | The AI analyzes the textual content of APRs, focusing initially on sections detailing the challenges faced by ICRCs. It identifies key themes, trends, and critical information that may require further attention. The AI methodology extracts insights and patterns from the data, which can then be compared with manual qualitative analysis outcomes. In subsequent stages, the AI will be expanded to analyze other sections of the APRs, such as progress toward goals and program impact. | Injury Control Research Center Annual Progress Reports | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Detecting Stimulant and Opioid Misuse and Illicit Use | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To detect and analyze non-therapeutic (illicit or misuse) stimulant and opioid use from free-text clinical notes in EHRs, which is not possible using standard medical codes. | The AI models enable the extraction of novel insights from EHRs regarding non-therapeutic drug use, improving the statistical analysis of health data for the National Hospital Care Survey (NHCS). This supports more accurate public health statistics and may influence analysis of other datasets with EHR clinical notes. | Two machine learning models (one for internal use, one for public release) that, together with rule-based text analysis, determine whether a patient has used a drug therapeutically or non-therapeutically, providing new insights for health statistics. | 24/03/2026 | b) Developed in-house | No | Two machine learning models (one for internal use, one for public release) that, together with rule-based text analysis, determine whether a patient has used a drug therapeutically or non-therapeutically, providing new insights for health statistics. | National Hospital Care Survey 2020 clinical notes | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DHP Virtual Assistant | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To assist HIV researchers by retrieving relevant information related to HIV and HIV research, improving productivity and efficiency in literature review. | Expected to increase productivity among HIV researchers by streamlining information retrieval for HIV research. | The AI assistant uses retrieval augmented generation (RAG) to return information related to HIV based on a user's query and other documents. | 24/11/2026 | b) Developed in-house | Yes | The AI assistant uses retrieval augmented generation (RAG) to return information related to HIV based on a user's query and other documents. | Internal documentation containing mapped eHARS LOINC codes, IQVIA data dictionaries, APRs | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Fuzzy matching tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To improve clearance and review management of CDC publications by identifying similar records between eClearance submissions and CDC-authored publications, ensuring compliance with NIHMS and CDC public access policies, and supporting science prioritization and impact analyses. | The tool is expected to streamline the clearance process, ensure compliance with public access policies, and assist in identifying and prioritizing CDC-authored publications. This will save staff time and improve the efficiency and accuracy of publication management. | The tool outputs matched records between eClearance submissions and CDC-authored publications, identifying potential duplicates or related documents for internal review. | 23/03/2026 | b) Developed in-house | No | The tool outputs matched records between eClearance submissions and CDC-authored publications, identifying potential duplicates or related documents for internal review. | eClearance submissions data and Science Clips data | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Reviewing Global Influenza Vaccine Literature | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To efficiently review a large volume of published literature and identify abstracts related to access to influenza vaccines. | The AI system is expected to save significant staff time by automating the initial literature review process, allowing epidemiologists to focus on in-depth analysis of relevant publications. This increases efficiency and scalability in reviewing global literature related to vaccine access. | A list of abstracts from published journal articles that are relevant to vaccine access. These abstracts are identified using large language models and are then reviewed manually for further analysis. | 24/10/2026 | b) Developed in-house | Yes | A list of abstracts from published journal articles that are relevant to vaccine access. These abstracts are identified using large language models and are then reviewed manually for further analysis. | Published journal articles accessed through freely available sources or via CDC research agreements. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | HIV Data Virtual Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To improve the data retrieval and automation process for HIV researchers by providing information from datasets and generating code for data analysis, thereby alleviating challenges in finding appropriate HIV-related data. | The AI assistant will improve productivity and research efforts by streamlining the process of finding relevant HIV datasets and generating analysis code, saving researchers time and enabling more efficient data-driven research. | The AI system uses retrieval augmented generation (RAG) to return information related to HIV based on user queries. Outputs include lists of datasets, associated variable names, and code (SAS, R, Python) for analysis, as well as specific dataset information based on researcher queries. | The AI system uses retrieval augmented generation (RAG) to return information related to HIV based on user queries. Outputs include lists of datasets, associated variable names, and code (SAS, R, Python) for analysis, as well as specific dataset information based on researcher queries. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Identify infrastructure supports for physical activity (e.g. sidewalks) in satellite and roadway images | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision | To streamline and automate the surveillance of sidewalks and other infrastructure that support physical activity, reducing the labor and cost associated with manual inspection. | The technology has the potential to significantly minimize the effort required for cataloging sidewalks and related infrastructure, which are important for promoting physical activity. This could lead to more efficient and cost-effective surveillance, supporting public health monitoring and interventions. | Outputs include geocoded data tables, maps, GIS layers, or summary reports identifying sidewalks, bicycle lanes, and other relevant infrastructure from satellite and roadway images. | 23/09/2026 | c) Developed with both contracting and in-house resources | No | Outputs include geocoded data tables, maps, GIS layers, or summary reports identifying sidewalks, bicycle lanes, and other relevant infrastructure from satellite and roadway images. | Publicly-available images were used for model evaluation | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Immunization Information Systems Guidance Documentation Navigation and Management (IDAB EDAV Azure OpenAI Technology Use) | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | To provide a modernized, efficient, and user-friendly method for CDC staff to retrieve, interact with, and update Immunization Information Systems (IIS) guidance documentation, improving knowledge retrieval and supporting the creation and validation of new guidance documents. | This AI solution enables faster, more actionable access to IIS guidance for subject matter experts, helps new employees find information more easily, and improves understanding of best practices. It streamlines the process of drafting, refining, and validating new guidance documents, increasing efficiency and accuracy in knowledge management. | The AI system provides synthesized answers to user queries in a Q&A interface, retrieving and summarizing information from publicly available IIS guidance documents. Outputs include generated text responses, draft guidance documents, and updated documentation. | The AI system provides synthesized answers to user queries in a Q&A interface, retrieving and summarizing information from publicly available IIS guidance documents. Outputs include generated text responses, draft guidance documents, and updated documentation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | LaserAI | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To reduce screening time and improve accuracy in all phases of systematic reviews by automating title and abstract screening, PDF retrieval, full-text review, and data extraction. | The AI is expected to streamline systematic review processes, reduce manual effort, and improve accuracy in identifying and extracting relevant data. The synthesized and graded data will inform the development of evidence-based infection prevention and control recommendations for healthcare settings. | The AI system outputs include prioritized lists of potentially relevant articles for screening, retrieved PDFs from PubMed, and suggested data for extraction from PDFs. | 24/04/2026 | a) Purchased from a vendor | LaserAI | No | The AI system outputs include prioritized lists of potentially relevant articles for screening, retrieved PDFs from PubMed, and suggested data for extraction from PDFs. | None: Any data used to train the AI is publicly available, peer-reviewed data. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NewsScape | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Emergency Management | Pilot | c) Not high-impact | Not high-impact | Generative AI | Development of early warning indicators. News articles can form an early warning indicator of public health events and other pieces of information which can be utilized across all major domains of public health. Because of the quantity of news articles, it is impossible to gather this information through manual effort. | There are a variety of endpoints that this AI output could help support. Various teams across the CDC have expressed interest in being able to quickly get the right news content, with summaries of those articles to help support outbreak detection, report generation, surveillance and monitoring of pathogen-specific news, etc. AI lets us efficiently filter and summarize thousands of news articles a day into a handful of daily "news events" that users can glean information from. | NewsScape is an AI-enabled news aggregation and summarization tool hosted within the 1CDP platform. The main motivation for building NewsScape was to develop a system that uses Large Language Models (LLMs) to surface relevant insights from recent news articles. NewsScape ingests a high volume of news articles, on the order of thousands every day, and surfaces the information related to topics of interest (for example, pathogen-related news articles or U.S. medical supply chain updates). The tool can be customized based on specific program office needs, and instances can be deployed independently of one another so that each program office can have its own custom version of NewsScape installed. | 23/01/2026 | c) Developed with both contracting and in-house resources | Palantir Technologies | Yes | NewsScape is an AI-enabled news aggregation and summarization tool hosted within the 1CDP platform. The main motivation for building NewsScape was to develop a system that uses Large Language Models (LLMs) to surface relevant insights from recent news articles. NewsScape ingests a high volume of news articles, on the order of thousands every day, and surfaces the information related to topics of interest (for example, pathogen-related news articles or U.S. medical supply chain updates). The tool can be customized based on specific program office needs, and instances can be deployed independently of one another so that each program office can have its own custom version of NewsScape installed. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Portfolio Analytics | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To automate the identification of themes within CDC-authored publications, providing richer information for science prioritization, evaluation, and communication. | Automated theme identification will provide centers and divisions with richer, more actionable information about their publications. Combined with impact metrics, this will aid in science prioritization, evaluation, and communication, supporting more effective and efficient scientific resource allocation. | The system outputs themes or topic clusters identified within CDC-authored publications. | 24/03/2026 | b) Developed in-house | No | The system outputs themes or topic clusters identified within CDC-authored publications. | eClearance submissions data and Science Clips data | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | RAPID Analysis of Policy and Program Documents (RAPID) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To streamline and automate the review and evaluation of policy and program documents, saving staff time, reducing the need for specialized training, providing consistent and complete answers, and enabling easy validation and collaboration. | The web application will save staff time, expand capacity, provide consistent and complete answers to policy surveillance and evaluation questions, reduce intra-rater variability, and enable easy validation and collaboration on policy projects. | An internal web application that allows users to import, store, search, and analyze policy or program documents; ask questions of relevant text segments; validate answers; and collaborate on projects. Outputs include plain language answers, binary codes or scores, and project-specific databases. | 25/09/2026 | c) Developed with both contracting and in-house resources | Yes | An internal web application that allows users to import, store, search, and analyze policy or program documents; ask questions of relevant text segments; validate answers; and collaborate on projects. Outputs include plain language answers, binary codes or scores, and project-specific databases. | RAPID analysis of DNPAO policy and program data using GPT results in project-specific databases. AI-generated data are compared to CDC manual reviews by SMEs for accuracy and reliability. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | SewerScout: Automated on-site sewage facility detection from aerial imagery to identify failed systems | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | To automate the identification of onsite wastewater systems and detect failed systems using aerial imagery, enabling public health departments to efficiently locate and assess septic systems without resource-intensive field visits. | The project will allow state, tribal, local, and territorial public health departments to more easily identify failing septic systems, address contamination risks, and improve disaster response by providing a ready catalog of systems. This will save time and resources compared to manual surveys, especially in rural and remote areas. | The system outputs include identification and mapping of onsite sewage facilities, with the intent to distinguish between functional and failed systems, supporting public health surveillance and intervention. | The system outputs include identification and mapping of onsite sewage facilities, with the intent to distinguish between functional and failed systems, supporting public health surveillance and intervention. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Sidekick Comms bot Offering User-friendly Tips (SCOUT) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To reduce the workload of health communicators by automating and simplifying the creation of web pages, social media posts, and other public-facing content, making information more accessible and understandable for the general public. | The solution accelerates content creation, reduces staff burden, and increases the accessibility and clarity of CDC information for the public. All AI-generated content is reviewed by experts to ensure accuracy and quality, supporting the CDC's mission to provide clear, science-based public health information. | The AI system generates plain language versions of existing web content, creates new content for web, social media, fact sheets, and graphics, and produces social media posts. All outputs are reviewed and edited by CDC experts before publication. | 25/01/2026 | c) Developed with both contracting and in-house resources | Yes | The AI system generates plain language versions of existing web content, creates new content for web, social media, fact sheets, and graphics, and produces social media posts. All outputs are reviewed and edited by CDC experts before publication. | The use case focuses on generating content from existing, publicly available CDC materials. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Transcribing Cognitive Interviews with Whisper | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reducing the time and effort required to transcribe cognitive interviews for federal health survey research, enabling faster and higher-quality analysis of qualitative data. | The AI system is expected to significantly reduce the hours required for qualitative review by automating transcription, enabling immediate comparison of interview concepts and answers, and providing timestamps for easier reference. This will accelerate research publication and improve the quality of survey questions used in federal surveys. | The AI generates transcripts from recorded interviews, which are used by staff for qualitative research in support of federal health survey research. | 24/07/2026 | b) Developed in-house | No | The AI generates transcripts from recorded interviews, which are used by staff for qualitative research in support of federal health survey research. | No agency-owned data was used; publicly available data was used to evaluate performance. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Use of Natural Language Processing for Topic Modeling to Automate Review of Public Comments to Notice of Proposed Rulemaking | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual review of large volumes of public comments for Notices of Proposed Rulemaking is labor-intensive and time-consuming. The AI is intended to organize and cluster comments by theme, improving the efficiency and effectiveness of manual review and ensuring all topics are accurately reported. | The AI system will enhance the speed and quality of manual review of public comments, enable better thematic organization, and reduce the burden on staff. This supports compliance with legal requirements for public comment review and improves the insights gained from public input. | The AI generates clusters of similar public comments, organized by theme, to aid in manual review. | 23/04/2026 | b) Developed in-house | Yes | The AI generates clusters of similar public comments, organized by theme, to aid in manual review. | Not specified | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Short-term Forecasting of Severe Outcomes for Seasonal and Epidemic Pathogens | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Predict severe disease outcomes - such as emergency department visits or hospital admissions - over short time horizons (1-4 weeks) to improve situational awareness for planning and decision-making at the national, state, and local level. Traditional AI/ML models (e.g. time series models) are mainly used as baselines against which to test and improve more sophisticated modeling methods. | Providing timely, accurate, and actionable forward-looking information on severe disease outcomes to government officials and the public. | Current outputs include weekly state and national hospital admissions forecasts for COVID-19 and influenza (public-facing) and weekly state and national ED visit forecasts for COVID-19 and influenza (internal to CDC at this time). | 23/10/2026 | c) Developed with both contracting and in-house resources | Yes | Current outputs include weekly state and national hospital admissions forecasts for COVID-19 and influenza (public-facing) and weekly state and national ED visit forecasts for COVID-19 and influenza (internal to CDC at this time). 
| Internal and publicly available hospital admissions data collected through the National Healthcare Safety Network (NHSN), internal and publicly available emergency department visit data collected through the National Syndromic Surveillance Program (NSSP), internal wastewater concentration data collected through the National Wastewater Surveillance System (NWSS) | No | k) None of the above | Yes | CFA's signal fusion modeling framework: https://github.com/CDCgov/pyrenew; CFA's renewal model implementation: https://github.com/CDCgov/pyrenew-hew; CFA-run COVID-19 Forecasting Hub: https://github.com/CDCgov/covid19-forecast-hub | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CDC Chatbot - Enterprise Data Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool is a general purpose assistant for CDC staff to ask questions of large sets of documents. Staff may take hours to find the relevant information, and this tool enables fast access powered by Retrieval Augmented Generation (RAG). | Staff have increased access to relevant information and documents through faster and easier knowledge management. | The system generates responses to staff questions based on the available information provided. This includes citations and references to sections of available documents for staff to further explore. | 24/02/2026 | c) Developed with both contracting and in-house resources | Yes | The system generates responses to staff questions based on the available information provided. This includes citations and references to sections of available documents for staff to further explore. | Documentation, standard operating procedures, or other materials supplied by staff may be used as content for the RAG model here. Examples include the documentation from our Enterprise Data, Analytics, and Visualization platform explaining the available tools, products, and other features. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Leveraging AI for Metadata Tagging for Enterprise Data Catalog of CDC | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual metadata tagging for datasets in the CDC Enterprise Data Catalog is inconsistent, incomplete, and time-consuming. The AI automates and standardizes metadata tagging, improving catalog usability and reducing manual effort. | The AI increases the speed and consistency of metadata tagging, making the data catalog more usable for CDC staff. This reduces manual effort, improves the completeness and standardization of metadata, and helps staff more efficiently find and use relevant datasets. | The AI generates suggested metadata fields (tags) for each dataset based on existing metadata, which are then used by staff to improve dataset discovery and relevance in the enterprise data catalog. | 24/06/2026 | b) Developed in-house | Yes | The AI generates suggested metadata fields (tags) for each dataset based on existing metadata, which are then used by staff to improve dataset discovery and relevance in the enterprise data catalog. | Enterprise Data Catalog metadata fields | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Malaria parasites DNA barcode geography classification | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To complement epidemiologic investigations of domestic malaria cases by determining the geographic origin of malaria parasite strains, helping to understand how the strain entered the US. | This AI supports epidemiological investigations by providing rapid, automated classification of malaria parasite genotypes to geographic origins. This enhances the ability to track and respond to malaria cases, especially those domestically acquired, and supports public health interventions. For more information, see the manuscript: https://journals.asm.org/doi/full/10.1128/aac.01203-24 | The AI examines a sequence barcode/genotype and assigns the malaria parasite genotype to a geographic origin (e.g., continent or subregion). | 23/07/2026 | b) Developed in-house | Yes | The AI examines a sequence barcode/genotype and assigns the malaria parasite genotype to a geographic origin (e.g., continent or subregion). | Data used are a mixture of data generated at CDC and other data available publicly. CDC data: https://www.ncbi.nlm.nih.gov/bioproject/PRJNA428490/ https://www.ncbi.nlm.nih.gov/bioproject/PRJNA1092573/ https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJNA1110244 Non-CDC data: https://apps.malariagen.net/apps/pf7/ Travel histories from case patients were used to assess model performance (see manuscript: https://journals.asm.org/doi/full/10.1128/aac.01203-24) | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | The School Closure Awareness System | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To efficiently and accurately identify and categorize unplanned school closures across the U.S. using publicly available social media data, replacing a costly and labor-intensive manual process. | The AI system has saved nearly $2 million in contracting fees and reduced human work hours by 200 hours. It enables faster, more comprehensive, and more detailed capture of unplanned school closure data than the previous manual process, supporting CDC's emergency response and reporting obligations. | The system processes Facebook posts from about 40,000 school or district accounts, using a large language model to categorize posts as unplanned school closures (by event type: weather, health, facility, safety) and denote status changes (full closure, virtual, hybrid, early/late dismissal). Outputs are reviewed and recoded by staff every 24 hours. | 22/11/2026 | b) Developed in-house | Yes | The system processes Facebook posts from about 40,000 school or district accounts, using a large language model to categorize posts as unplanned school closures (by event type: weather, health, facility, safety) and denote status changes (full closure, virtual, hybrid, early/late dismissal). Outputs are reviewed and recoded by staff every 24 hours. | Publicly available Facebook posts from approximately 40,000 school or district accounts. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using Generative AI for Stance Analysis of Public Comments on CDC's Proposed Rules | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Manual review of public comments for rulemaking is labor-intensive and time-consuming due to the volume and diversity of responses. The AI system automates stance analysis and topic modeling to improve efficiency, accuracy, and insight in the review process. | The AI system can save significant time for CDC's public policy experts by automating the categorization and stance analysis of public comments, enabling faster and more comprehensive insight gathering for regulatory analysis. This supports compliance with legal requirements and improves the quality of public policy review. | The system uses generative AI to analyze public comments, providing outputs such as stance (support/oppose/neutral), topics, and sentiment for each comment. These outputs aid regulatory analysts in reviewing and summarizing public feedback. | 23/07/2026 | b) Developed in-house | No | The system uses generative AI to analyze public comments, providing outputs such as stance (support/oppose/neutral), topics, and sentiment for each comment. These outputs aid regulatory analysts in reviewing and summarizing public feedback. | Public comments submitted in response to CDC's proposed rules. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | A reusable NLP pipeline for clinical narratives preprocessing and characterization | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Autocoding to Support Adverse Drug Event Surveillance | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual coding of adverse drug event reports is time-consuming and slows down the production of prevalence estimates. The AI model will automate and speed up the coding process for surveillance epidemiologists. | The AI model will help epidemiologists quickly determine whether reported adverse drug events meet surveillance case definitions, speeding up the coding process and enabling faster, more accurate prevalence estimates for the surveillance system. | The model takes a de-identified free-text description of a patient's emergency department visit, along with other pre-coded variables, and outputs the probability that the encounter meets the surveillance case definition for an adverse drug event. | 24/05/2026 | b) Developed in-house | No | The model takes a de-identified free-text description of a patient's emergency department visit, along with other pre-coded variables, and outputs the probability that the encounter meets the surveillance case definition for an adverse drug event. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automating LIMS Bioinformatics Workflow Configuration and Enhancing Lab Quality Management with AI | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Configuring and customizing bioinformatics workflows in Clarity LIMS is time-consuming and requires specialized expertise. The AI tool automates this process, enabling rapid deployment and lowering the barrier for laboratory staff. It also improves access to quality management and regulatory documentation. | The system can reduce the time required to configure Clarity LIMS workflows, enable rapid deployment during outbreaks, lower the expertise needed for workflow customization, and enhance team learning and training by providing easy access to relevant documentation and best practices. | The AI system converts natural language lab protocols into precise XML workflows compatible with Clarity LIMS and serves as an interactive knowledge base for laboratory quality management and regulatory documentation. | 24/01/2026 | b) Developed in-house | No | The AI system converts natural language lab protocols into precise XML workflows compatible with Clarity LIMS and serves as an interactive knowledge base for laboratory quality management and regulatory documentation. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DGMH AI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Responding to inquiries is time-consuming for staff, especially when information is available but hard to find on public CDC webpages. The AI chatbot drafts initial responses using content from CDC's website, reducing turnaround time and freeing staff to focus on higher-priority tasks. | The chatbot will reduce turnaround time for responding to inquiries, improve consistency of responses, and allow staff to focus on other priorities. Evaluation will assess response accuracy, completeness, and revision needs, as well as consistency across similar inquiries. | The AI chatbot generates an initial draft response to inquiries using content from CDC's public-facing webpages. Each draft is reviewed and cleared through the existing CDC process before being sent. | 24/10/2026 | b) Developed in-house | No | The AI chatbot generates an initial draft response to inquiries using content from CDC's public-facing webpages. Each draft is reviewed and cleared through the existing CDC process before being sent. | Content from CDC's public-facing webpages | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Distiller SR: AI to screen research articles for Community Guide reviews | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Screening research articles for systematic reviews is time-consuming. The AI tool will automate and speed up the process, supporting the Community Preventive Services Task Force in making timely recommendations. | The AI tool may increase the speed of conducting systematic reviews, expediting the evaluation of public health programs for CPSTF recommendations. | The AI system uses machine learning to efficiently screen and identify research articles relevant to evaluating the effectiveness of interventions. | 23/05/2026 | b) Developed in-house | No | The AI system uses machine learning to efficiently screen and identify research articles relevant to evaluating the effectiveness of interventions. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Evaluating Generative AI for polio containment. | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Respiratory Virus Response (RVR) Data Analysis Concept | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Summarizing subject matter expert (SME) interpretations and knowledge during a public health response is time-consuming. The AI aims to improve efficiency in knowledge dissemination by generating key takeaways from SME-provided information. | The AI solution will improve the efficiency of disseminating essential information to the public, enable quicker SME review and clearance, and enhance understanding of AI limitations (e.g., bias, hallucinations) for future public health applications. | The AI generates summaries of bulleted SME information, producing key takeaways for review and clearance. The system will be evaluated for its ability to contextualize responses and improve tone and style in future iterations. | 24/06/2026 | b) Developed in-house | No | The AI generates summaries of bulleted SME information, producing key takeaways for review and clearance. The system will be evaluated for its ability to contextualize responses and improve tone and style in future iterations. | Bulleted SME information and uncleared data from the Respiratory Virus Response (RVR) | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | School LLM initial abstract review process | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual review and categorization of thousands of research abstracts related to school readiness science is time-consuming. The AI enables efficient extraction and categorization of themes, reducing human effort and time. | The AI allows for efficient categorization of thousands of abstracts in a much shorter time frame, with less human effort, and presents results in a user-friendly dashboard for health scientists to use in research and decision-making. | The AI uses an LLM to extract data from abstract reviews and categorize relevant themes and topics into a user-friendly dashboard, enabling users to pull resources from 2012–2022 for specific school closure outcomes or themes. | 23/08/2026 | c) Developed with both contracting and in-house resources | No | The AI uses an LLM to extract data from abstract reviews and categorize relevant themes and topics into a user-friendly dashboard, enabling users to pull resources from 2012–2022 for specific school closure outcomes or themes. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CDC Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool is a general purpose assistant for CDC staff powered by Large Language Models. Staff can upload documents, summarize information, extract information, create content, develop software code, or perform general tasks to support operational efficiency. | Staff have used this tool to save an estimated 40,000 hours across various domains, including efficiency gains with content creation, software development, and other back-office tasks within CDC. This has provided a greater than 500% ROI for the agency. | The system generates responses to staff questions in a general purpose manner, including questions about uploaded documents. Staff may use the generated text in any manner they deem appropriate. | 24/02/2026 | c) Developed with both contracting and in-house resources | Yes | The system generates responses to staff questions in a general purpose manner, including questions about uploaded documents. Staff may use the generated text in any manner they deem appropriate. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | EDAV Virtual Assistant - Eva (Microbot Service) - Bot as a Service (BaaS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool serves as a helpdesk to support CDC staff in finding relevant information related to the Enterprise Data, Analytics, and Visualization platform available across existing documentation. | Increased access for staff to documentation and decreased time spent searching for relevant information. This includes a reduction in the number of support tickets from staff. | Staff will ask the chatbot questions related to various platform documentation. The AI use case supplies information back, including references and citations to the existing documentation, for staff to consult. | 25/08/2026 | c) Developed with both contracting and in-house resources | Yes | Staff will ask the chatbot questions related to various platform documentation. The AI use case supplies information back, including references and citations to the existing documentation, for staff to consult. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | EDAV Azure DataFactory - Pipeline failure Analysis | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | There are over 4,000 pipelines that produce status logs. Manual review is highly time-consuming and error-prone due to the scale of the logs. | Benefits include improved quality of log summaries and additional information that is available faster than with traditional manual processes. | The system summarizes Data Factory pipeline logs and analyzes the information they contain, including recommended next steps to help staff resolve potential challenges in maintaining these data pipelines. | 25/07/2026 | c) Developed with both contracting and in-house resources | Yes | The system summarizes Data Factory pipeline logs and analyzes the information they contain, including recommended next steps to help staff resolve potential challenges in maintaining these data pipelines. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | 1CDP (SEDRIC) AIP for Advanced Foodborne Outbreak Investigation (AI Summarization and Receipt Reading) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI solution impacts the process of investigating foodborne disease outbreaks. These foodborne outbreaks require cooperative efforts from CDC staff, FDA, USDA, and local agencies, and the AI system is used through a centralized data platform, the System for Enteric Disease Response, Investigation, and Coordination (also known as SEDRIC). For more information on SEDRIC, please go to our website: https://www.cdc.gov/foodborne-outbreaks/php/foodsafety/tools/index.html | SEDRIC's AIP use case provides CDC epidemiologists the ability to accelerate their investigations of multi-state foodborne disease outbreaks by more effectively leveraging data available in a rich data source, such as receipts from grocery stores, that otherwise requires extensive time and human effort to parse through. In addition, this workflow would free up epidemiologists' time and, potentially, increase the frequency with which both CDC and STLT partners could utilize shopper card, receipt, and free-text responses to support investigations. There are two main expected benefits from this use case. The first is that manually entering receipt information, shopper card information, or other free-text fields is traditionally error-prone and time-intensive. This AI system provides a human-in-the-loop opportunity to review and update data entry points while reducing the time staff spend gaining these insights. Having a set structured output also increases standardization of this information and eases reporting in situations requiring cooperation among multiple organizations. 
The second benefit revolves around the summarization capability: the extensive process of mapping common names of different food items is done automatically, greatly reducing the human labor time needed to generate dashboards of information on current foodborne investigations that serve as decision aids in outbreak response. | The Artificial Intelligence Platform (AIP) available within SEDRIC provides CDC epidemiologists the power to accelerate their investigations of multi-state foodborne disease outbreaks. It can extract structured data from grocery receipts, shopper card records, and free-text responses in order to catalog the food items purchased by affected patients. It can also map those items to SEDRIC-defined vehicles which categorize the items and highlight commonalities across patients, helping to pinpoint potential outbreak vehicles. AIP can summarize these results to provide insights from information pulled from shopper receipts. Given that ingredients can be found in multiple food products, and some ingredients such as herbs like coriander/cilantro may go by multiple names or be reported in multiple languages, this summarization tool provides a faster way to gather summary information from receipts on different food items which may be part of a foodborne investigation. | 23/10/2026 | c) Developed with both contracting and in-house resources | Palantir Technologies | Yes | The Artificial Intelligence Platform (AIP) available within SEDRIC provides CDC epidemiologists the power to accelerate their investigations of multi-state foodborne disease outbreaks. It can extract structured data from grocery receipts, shopper card records, and free-text responses in order to catalog the food items purchased by affected patients. It can also map those items to SEDRIC-defined vehicles which categorize the items and highlight commonalities across patients, helping to pinpoint potential outbreak vehicles. 
AIP can summarize these results to provide insights from information pulled from shopper receipts. Given that ingredients can be found in multiple food products, and some ingredients such as herbs like coriander/cilantro may go by multiple names or be reported in multiple languages, this summarization tool provides a faster way to gather summary information from receipts on different food items which may be part of a foodborne investigation. | Data are used in outbreak/response scenarios, such as foodborne illness outbreak response. The data used depend on the situation and outbreak, and may be owned by CDC, FDA, USDA, State Health Departments, Tribal Health Departments, Local Health Departments, Territorial Health Departments, or other entities. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Genetic distance computation method for comparing complex multi-locus parasite (Cyclospora) genotypes | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Investigating the similarity of infections for epidemiologic investigations of cyclosporiasis outbreaks. The method enables clustering and comparison of complex genotypes, which are too large and complex for traditional methods, to identify related infections during outbreak tracking. | The AI enables analysis of massive genotype datasets, facilitating rapid and accurate identification of infection clusters. This supports epidemiologic investigations and traceback for cyclosporiasis and other parasites, improving outbreak response and public health interventions. | The system outputs genetic distance matrices and clusters of closely related infections, based on comparisons of haplotypes from clinical samples. These outputs are used to complement epidemiologic investigations and traceback activities. | 19/09/2026 | b) Developed in-house | Yes | The system outputs genetic distance matrices and clusters of closely related infections, based on comparisons of haplotypes from clinical samples. These outputs are used to complement epidemiologic investigations and traceback activities. | Cyclospora sequence data generated by CDC, State Public Health Labs, and the Public Health Agency of Canada, following a CDC-developed protocol for 8 genotyping markers. All CDC and State Public Health Labs sequence data are publicly available via NCBI (see below). | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | MedCoder - Coding literal text cause of death information reported on death certificates to ICD-10 | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Automating the coding of literal text causes of death from death certificates to ICD-10 codes, improving accuracy, efficiency, and timeliness of mortality data for public health surveillance. | MedCoder increased the percentage of deaths that can be automatically and accurately coded from 70-75% to over 85%, resulting in substantial cost savings (hundreds of thousands of dollars) and significantly enhancing the timeliness of data for urgent public health concerns (e.g., COVID, drug overdose deaths), enabling near real-time surveillance. | MedCoder outputs ICD-10 cause of death codes from literal text on death certificates. It also flags complex or frequently miscoded cases for manual review. The system uses NLP to cleanse and standardize input text before coding. | 22/06/2026 | b) Developed in-house | Yes | MedCoder outputs ICD-10 cause of death codes from literal text on death certificates. It also flags complex or frequently miscoded cases for manual review. The system uses NLP to cleanse and standardize input text before coding. | Death certificate literal text data, including cause of death statements, and associated demographic information such as sex. Documentation for model training and evaluation data is widely available. | No | b) Sex | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NCIRD SmartFind ChatBots - Public and Internal | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Improving efficiency and effectiveness of internal partner mailbox email management and knowledge base maintenance for program staff, and previously, providing public-facing answers to FAQs. | The internal Knowledge-Bots SharePoint component helps program staff manage partner emails more efficiently and effectively, enabling shared knowledge base use across mailbox managers. The public-facing chatbots previously provided timely, agency-cleared answers to public and partner questions, supporting rapid information dissemination during the COVID-19 pandemic. | Conversational ChatBots that analyze free text questions and provide agency-cleared answers that best match the question. The system also flags complex or unanswerable queries for manual review. | 24/12/2026 | c) Developed with both contracting and in-house resources | Yes | Conversational ChatBots that analyze free text questions and provide agency-cleared answers that best match the question. The system also flags complex or unanswerable queries for manual review. | Public-facing FAQs and other agency-reviewed information accessible publicly were used as the knowledge base for the public-facing chatbots. Internal chatbot uses internal knowledge base content. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NIOSH Industry and Occupation Computerized Coding System (NIOCCS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Replacing manual coding of industry and occupation text with automated coding to standardized codes, reducing cost and increasing speed, accuracy, and consistency for research and analysis. | Reduces the high cost of manual coding, promotes increased coding speed, accuracy, and consistency, and enables more efficient use of industry and occupation data for research and analysis. | Standardized industry and occupation codes generated from free-text input, suitable for research and analysis. | 24/01/2026 | b) Developed in-house | Yes | Standardized industry and occupation codes generated from free-text input, suitable for research and analysis. | STLT's death record data (received via NCHS) and BRFSS survey data. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Nowcasting Injury Trends | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Providing real-time estimates of injury and death trends to enhance situational awareness and expedite surveillance and research activities, especially when gold standard data are delayed. | Enables timelier identification and investigation of emerging injury trends, improving the speed and effectiveness of public health surveillance and response when gold standard data are not yet available. | An internal-facing, interactive dashboard that provides week-to-week national nowcasts of injury death trends, using multiple traditional and non-traditional datasets and a multi-stage machine learning pipeline. | 22/01/2026 | b) Developed in-house | Yes | An internal-facing, interactive dashboard that provides week-to-week national nowcasts of injury death trends, using multiple traditional and non-traditional datasets and a multi-stage machine learning pipeline. | Emergency Department data from the National Syndromic Surveillance Program. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Risk Assessment Module (RAM) for the National Diabetes Prevention Program (National DPP) Operations Center. | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To assist in determining if an organization participating in the National DPP is at risk of improperly starting, going inactive, or not achieving necessary goals for continued participation in CDC's Diabetes Prevention Recognition Program. | The RAM helps program managers synthesize large amounts of organization-level data to make informed decisions on assisting organizations, leading to increased program participation and improved health outcomes. | The RAM is a reporting tool that ingests organization-level data (including participant enrollment, demographics, and risk factors) to generate a ranked list of organizations at highest risk of failing to meet program objectives. Outputs are currently restricted to CDC associates, with plans for future access by State Quality Specialist users. | 24/08/2026 | c) Developed with both contracting and in-house resources | Yes | The RAM is a reporting tool that ingests organization-level data (including participant enrollment, demographics, and risk factors) to generate a ranked list of organizations at highest risk of failing to meet program objectives. Outputs are currently restricted to CDC associates, with plans for future access by State Quality Specialist users. | Historical data from organizations' 6-month submissions of participant attendance in Lifestyle change classes, sourced from the DDT DPRP Portal. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Semi-Automated Nonresponse Detection for Surveys (SANDS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual review of open-ended survey responses is labor-intensive and cost-prohibitive at scale. SANDS automates the detection of nonresponses in survey data, reducing the burden on researchers and improving data quality. | SANDS significantly reduces manual curation time for open-ended survey responses by providing automated scoring and flagging of nonresponses. This enables faster compilation of high-quality datasets for qualitative research and streamlines the review process for researchers. | The system outputs scores for open-ended survey responses, identifying likely nonresponses and flagging responses that require further review. This helps improve survey data quality and informs questionnaire design. | 22/09/2026 | b) Developed in-house | No | The system outputs scores for open-ended survey responses, identifying likely nonresponses and flagging responses that require further review. This helps improve survey data quality and informs questionnaire design. | 3,000 labeled open-ended responses to web probes on questions relating to the COVID-19 pandemic, gathered from the Research and Development Survey (RANDS) conducted by the Division of Research and Methodology at the National Center for Health Statistics. | Yes | k) None of the above | Yes | https://huggingface.co/NCHS/SANDS | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Sequential Coverage Algorithm (SCA) and partial Expectation-Maximization (EM) estimation in Record Linkage | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To improve the accuracy and efficiency of record linkage in CDC's National Center for Health Statistics (NCHS) Data Linkage Program, particularly for large datasets, by automating the development and selection of blocking groups and reducing manual effort. | Increased accuracy and efficiency in data linkage; automation reduces manual effort and increases scalability; machine learning algorithms adapt and improve over time, refining linkage processes; enables researchers to better examine factors influencing disability, chronic disease, health care utilization, morbidity, and mortality | Development of joining methods (blocking groups) for large datasets; estimation of the proportion of matched pairs within each block; improved linkage accuracy and efficiency | 20/08/2026 | c) Developed with both contracting and in-house resources | Yes | Development of joining methods (blocking groups) for large datasets; estimation of the proportion of matched pairs within each block; improved linkage accuracy and efficiency | Data from the National Hospital Care Survey, the National Health and Nutrition Examination Survey, the National Health Interview Survey, and linked administrative data. | Yes | b) Sex | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | TowerScout: Automated cooling tower detection from aerial imagery for Legionnaires' Disease outbreak investigation | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Rapid identification of cooling towers that could potentially be spreading Legionella bacteria during outbreaks, enabling faster and more efficient outbreak response. | Detects cooling towers approximately 600 times faster than manual searches. Enables more efficient and timely response during Legionella outbreaks. Improves public health response and outbreak containment. | Detection and classification of cooling towers within aerial imagery. | 21/05/2026 | c) Developed with both contracting and in-house resources | Yes | Detection and classification of cooling towers within aerial imagery. | Aerial imagery data used for object detection and image classification. | No | k) None of the above | Yes | https://github.com/TowerScout/TowerScout | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Assessing Large Language Models for Synthetic Survey Data Generation | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Survey data de-identification is crucial for the NCHS to maximize data utility while protecting privacy, but determining and applying modern best practices requires further research. NCHS conducts national surveys and releases microdata (data containing information about individuals) for public use. To protect survey participants' confidentiality, statistical disclosure limitation techniques have been used to de-identify data, but these methods have drawbacks: they lose statistical properties of the original data and thus limit useful analyses. Additionally, these methods are not designed for very large data or text data. Use of synthetic data may offer another option. The goal of synthetic data is to preserve essential statistical features and variable relationships of the original data such that statistical inference based on the synthetic data is close to that of the original data. Large language models (LLMs) may be able to address limitations of statistical methods for synthetic data creation, especially for natural language data. We aim to advance knowledge of this application of LLMs to enable staff to select the optimal tools for synthetic data generation. | Current statistical methods for synthetic data generation have drawbacks such as difficulty handling very large datasets, a steep learning curve for people with less statistics or coding background, and an inability to generate natural language data. 
Thus, if LLMs prove successful at synthetic survey data generation, this alternative method would enable more data synthesis at scale, data synthesis by staff with varying levels of statistics background, and the first-ever release of synthetic survey text data. | Continuous, categorical, and free text data that matches properties of original survey data. | Continuous, categorical, and free text data that matches properties of original survey data. ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Development of in-silico genomic and patient datasets using generative ML algorithms | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | GenAIMeta: Generative AI CDC Metadata Query Application | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Metadata plays a crucial role in enhancing public understanding and usage of CDC data. Usable metadata are essential not only for making data easy to find, understand, and use on data.cdc.gov but also for synchronizing with other federal catalogs. Metadata on data.cdc.gov spans 1,056 datasets with ~20 fields each, syncing nightly with federal catalogs. Manual validation, normalization, and monitoring of this volume, together with the inconsistent quality and completeness of those fields, create bottlenecks for data discovery, governance, and downstream analytics. The aim is to leverage EDAV's Azure OpenAI-powered models to automate metadata validation, standardization, and monitoring at scale, replacing error-prone manual checks with real-time, AI-driven oversight. | Objective: Phase 1: automate metadata validation and monitoring on data.cdc.gov using EDAV's Azure OpenAI API; Phase 2: build user-centric AI agents and broaden application of the tool with tested agent efficacy. Evaluation approach: asked both general and domain-trained models the same set of real-world questions and benchmarked responses on four metrics: accuracy (match to ideal answers), relevance (alignment with user needs), clarity (readability and actionability), and completeness (coverage of all aspects of the question). Results: domain-trained models outscored general models on every metric; trained models delivered more precise, context-aware, and fully formed answers, while general models tended toward vague or overly broad responses. Conclusion: targeted, domain-specific training significantly boosts an LLM's ability to meet specialized user requirements. 
Key Benefits: actionable insights for better decision-making during time-sensitive scenarios; optimized resource allocation for improved efficiency; enhanced trust in decision-making frameworks through consistent performance. | Phase 1 implementation: EDAV's Azure AI infrastructure ingests and preprocesses data from data.cdc.gov, extracts vector embeddings for model training, and builds and fine-tunes LLM and ML models focused on metadata usage and quality monitoring; a monitoring dashboard connects directly to Azure AI outputs, provides real-time data-quality checks and metadata health metrics, and features interactive interfaces for key metrics and insights. Phase 2 implementation: domain-specific model training built a targeted dataset of real-world questions with paired ideal answers and fine-tuned and evaluated models against accuracy, relevance, clarity, and completeness benchmarks; a multi-user, multi-agent framework deployed specialized agents for distinct roles, enabling simultaneous support for diverse users (data-quality managers, data scientists, epidemiologists, etc.) and ensuring scalable, task-focused collaboration. | 25/01/2026 | c) Developed with both contracting and in-house resources | Yes | Phase 1 implementation: EDAV's Azure AI infrastructure ingests and preprocesses data from data.cdc.gov, extracts vector embeddings for model training, and builds and fine-tunes LLM and ML models focused on metadata usage and quality monitoring; a monitoring dashboard connects directly to Azure AI outputs, provides real-time data-quality checks and metadata health metrics, and features interactive interfaces for key metrics and insights. Phase 2 implementation: domain-specific model training built a targeted dataset of real-world questions with paired ideal answers and fine-tuned and evaluated models against accuracy, relevance, clarity, and completeness benchmarks; 
a multi-user, multi-agent framework deployed specialized agents for distinct roles, enabling simultaneous support for diverse users (data-quality managers, data scientists, epidemiologists, etc.) and ensuring scalable, task-focused collaboration. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Detecting, evaluating, and redacting PII in NAMCS HC Component | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | How effective are open-source PII detection models in identifying and redacting PII? The National Center for Health Statistics (NCHS) Division of Health Care Statistics has collected millions of health records with laboratory results from encounters at health centers via the National Ambulatory Medical Care Survey (NAMCS), Health Center (HC) Component. Due to inadvertent errors during data entry and processing, some records contain identifiers (e.g., names, locations) in non-PII fields. Because of this PII, certain fields cannot be made available for restricted or public use, but reviewing millions of records for PII is not practical. | If the process is feasible, it will significantly increase the healthcare lab data available to researchers for analysis. Additionally, the process could be applied to additional tables and years of data, increasing overall data availability. | A semi-automated process to conduct a quality control review of health data records, including potential PII records flagged for manual review. | A semi-automated process to conduct a quality control review of health data records, including potential PII records flagged for manual review. ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Retrieval Augmented Generation (RAG) with Q-Bank | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using generative AI to gain insight of older adult falls | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Nowcasting Burden and Infection Trends for Seasonal and Epidemic Pathogens | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improve real-time estimates of disease burden and infection trends for better situational awareness for planning and decision-making at the national, state, and local level. Traditional AI/ML models (e.g., time series models) are mainly used as baselines against which to test and improve more sophisticated modeling methods. | Providing timely, accurate, and actionable information on current and near-future disease risk and the effort required for control to government officials and the public. | Current outputs include weekly state-level estimates of the time-varying reproductive number (Rt), a measure of epidemic trajectory and an indicator of the level of effort needed to bring an epidemic under control, for COVID-19 and influenza (public-facing) and RSV (internal to CDC at this time), and weekly nowcasts of hospital admissions within the Respiratory Virus Hospitalization Surveillance Network (RESP-NET; internal to CDC at this time). | 23/11/2026 | c) Developed with both contracting and in-house resources | Yes | Current outputs include weekly state-level estimates of the time-varying reproductive number (Rt), a measure of epidemic trajectory and an indicator of the level of effort needed to bring an epidemic under control, for COVID-19 and influenza (public-facing) and RSV (internal to CDC at this time), and weekly nowcasts of hospital admissions within the Respiratory Virus Hospitalization Surveillance Network (RESP-NET; internal to CDC at this time). 
| Internal and publicly available hospital admissions data collected through the Respiratory Virus Hospitalization Surveillance Network (RESP-NET), and internal and publicly available emergency department visit data collected through the National Syndromic Surveillance Program (NSSP) | No | Race/Ethnicity; Sex; Age | Yes | Current methods used by CFA: https://github.com/epiforecasts/EpiNow2; current CFA deployment pipeline: https://github.com/CDCgov/cfa-epinow2-pipeline; methods in development by CFA: https://github.com/CDCgov/cfa-gam-rt | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automating the distribution of CDC-State Department cables using AI models in the Global Health Center | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Agentic AI | The output from the AI system is the automated and timely distribution of CDC-State Department cables. It delivers accurate, formatted messages directly to the intended recipients within seconds, ensuring fast and reliable communication without manual intervention. | This AI use case reduces cable distribution time from 24 hours to 30 seconds, greatly speeding up communication. It saves staff time, improves accuracy, and helps the CDC respond faster to health threats, supporting its mission to protect public health effectively. | The output from the AI system is the automated and timely distribution of CDC-State Department cables. It delivers accurate, formatted messages directly to the intended recipients within seconds, ensuring fast and reliable communication without manual intervention. | 25/05/2026 | c) Developed with both contracting and in-house resources | Yes | The output from the AI system is the automated and timely distribution of CDC-State Department cables. It delivers accurate, formatted messages directly to the intended recipients within seconds, ensuring fast and reliable communication without manual intervention. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DHDSP NOFO Technical Assistance Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Division for Heart Disease and Stroke Prevention (DHDSP) experienced numerous challenges in coordinating with grantees/recipients to support questions related to their programs. These challenges often resulted in inefficiencies such as inconsistent communication, limited accessibility to data, potential inaccuracies in responses, and delayed responses. To address these issues, the TA Chatbot was developed to reduce the administrative burden on staff related to their assigned Technical Assistance (TA) case load. | The chatbot is trained on hundreds of DHDSP NOFO-specific and HHS policy documents that PDSB Project Officers, the PDSB Data Team, and AREB Evaluation TA Providers would otherwise have to search through to find answers to recipient questions. Use of this chatbot will save hundreds of hours of staff time so they can focus on other tasks to support DHDSP-funded recipients. | The Technical Assistance chatbot incorporates a large language model (LLM) to provide quick, accurate, plain-language answers to questions on grants policy and program processes, protocols, and requirements. | 24/08/2026 | c) Developed with both contracting and in-house resources | Yes | The Technical Assistance chatbot incorporates a large language model (LLM) to provide quick, accurate, plain-language answers to questions on grants policy and program processes, protocols, and requirements. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Rapid Detection of Acute Releases of Toxic Substances (RaDARTS) | a) Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reporting on Acute Releases of Toxic Substances involves review of news media sources and is a highly labor-intensive process in which content may be missed. The goal is to rapidly ingest, categorize, summarize, and store data from news media sources to inform situational awareness and surveillance of Acute Releases of Toxic Substances. | This project aligns with the NCEH/ATSDR Strategic Framework to monitor and effectively respond to environmental public health hazards, emergencies, and threats that affect domestic and international health security and to build appropriate capacity within state, local, territorial, and tribal communities. This project will significantly reduce the burden on staff and the time it takes to review data, and improve the timeliness of information, all in a cost-effective manner. | Data points such as the number of people injured, the number of fatalities, and any public health actions associated with the events (e.g., shelter in place, evacuation) | Data points such as the number of people injured, the number of fatalities, and any public health actions associated with the events (e.g., shelter in place, evacuation) ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DLS AI Assistant tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Regulatory laboratory compliance has become increasingly complex, requiring scientists, lab personnel, and managers to understand laboratory quality regulations like the Clinical Laboratory Improvement Amendments (CLIA). The DLS AI Assistant is a tool designed to help scientists and lab workers by providing relevant guidance, such as the internal DLS Policy and Procedures Manual (DLS PPM) and external CLIA and International Organization for Standardization (ISO) regulations for laboratory quality. It offers personalized responses based on the user's level of expertise. In the final stage of the project, we plan to add a feature that will automatically evaluate Standard Operating Procedures (SOPs) to check if they comply with all relevant documents. However, the DLS AI Assistant is meant to assist and does not dictate compliance. | We estimate that 5-10% of all time spent within the DLS is focused on compliance efforts, such as documentation, training sessions, and audit preparations and participation. The DLS AI Assistant tool supports these quality improvement efforts by helping staff understand and follow laboratory regulations more efficiently. The goal is to streamline the review of compliance efforts without compromising quality. Additional benefits include increased harmonization and reduced time spent on evaluating edge cases. For the CDC, this tool adds another layer of checks and balances and enhances knowledge sharing, ultimately leading to better and more accurate laboratory methods. 
| Text-based information includes regulatory compliance details from both internal sources, such as the DLS Policy and Procedures Manual (DLS PPM), and external sources, like CLIA and International Organization for Standardization (ISO) regulations for laboratory quality. | 25/05/2026 | c) Developed with both contracting and in-house resources | Yes | Text-based information includes regulatory compliance details from both internal sources, such as the DLS Policy and Procedures Manual (DLS PPM), and external sources, like CLIA and International Organization for Standardization (ISO) regulations for laboratory quality. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | PDF information extraction for 889 Forms | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Document processing and extraction of text from required 889 compliance forms (PDFs and/or image files) into a searchable cloud-based database, eliminating manual data entry and providing a more transparent log of compliance-related information in a repository for users to check, verify, and review. | The expected benefits include greater transparency regarding vendor compliance. Users have an easier way to look up and view vendor information across the center, eliminating duplicate requests for vendor compliance. Users and staff will save time and potentially introduce fewer errors compared to manual data entry. | The 889 Document Processor is an Optical Character Recognition (OCR) model built using Microsoft AI Builder. The 889 Form Repository applies a Power Automate flow, which triggers when a user uploads an 889 Form into the repository (a SharePoint library), applying the 889 Document Processor to read and recognize the text on the form (printed or handwritten) and extract the text into a formatted SharePoint list. | 25/04/2026 | b) Developed in-house | No | The 889 Document Processor is an Optical Character Recognition (OCR) model built using Microsoft AI Builder. The 889 Form Repository applies a Power Automate flow, which triggers when a user uploads an 889 Form into the repository (a SharePoint library), applying the 889 Document Processor to read and recognize the text on the form (printed or handwritten) and extract the text into a formatted SharePoint list. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | ATSDR Toxicological Assistant chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Generative AI | The ATSDR Toxicological Assistant chatbot, available via the CDC Chatbot, can query 180 comprehensive toxicological profiles (the ATSDR A-Z Index of Tox Profiles). The chatbot can assist users in accessing toxicological data, answering questions about specific chemicals, providing information on exposure pathways, and offering guidance on health assessments related to environmental contaminants. This tool can significantly reduce research time, ultimately enhancing efficiency in accessing critical toxicological information. | This tool significantly reduces research time, enhancing efficiency in accessing critical toxicological information and enabling an effective and rapid response to general-public inquiries about chemical exposure received through ATSDR Info. | Substance Lookup: summarizes health risks, exposure pathways, and toxicological data from a pre-built library of over 180 ATSDR toxicological profiles. Interactive Q&A: generates answers to user questions based solely on information within the toxicological profiles, ensuring accuracy and reliability. Navigation Support: guides users to specific chapters, tables, pages, and references within lengthy toxicological documents. Comparative Analysis: enables users to compare the health effects of different substances, facilitating comprehensive environmental research and exposure assessments. Document Generation: assists in creating documents and reports tailored to different reading levels, supporting health consultations and public communication. 
| 25/04/2026 | c) Developed with both contracting and in-house resources | Yes | Substance Lookup: summarizes health risks, exposure pathways, and toxicological data from a pre-built library of over 180 ATSDR toxicological profiles. Interactive Q&A: generates answers to user questions based solely on information within the toxicological profiles, ensuring accuracy and reliability. Navigation Support: guides users to specific chapters, tables, pages, and references within lengthy toxicological documents. Comparative Analysis: enables users to compare the health effects of different substances, facilitating comprehensive environmental research and exposure assessments. Document Generation: assists in creating documents and reports tailored to different reading levels, supporting health consultations and public communication. | A-Z Index of Tox Profiles, Toxicological Profiles (ATSDR): https://www.atsdr.cdc.gov/toxicological-profiles/glossary/index.html | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Machine Learning with Premier Healthcare Data to inform predictive modeling of antibiotic use | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Surveillance data, including the Emerging Infections Program (EIP) and the National Healthcare Safety Network (NHSN), can be linked to other inpatient data to better characterize risk factors and outcomes of important healthcare-associated infections (HAI) and antimicrobial resistance (AR). However, discharge data often lack detailed information about inpatient antibiotic use. | This project uses the Premier Healthcare Database (PHD), an electronic health database, to predict inpatient antibiotic use and length of therapy using data readily available in claims and other electronic health record databases. This adds additional potential sources of information to support insights. | These models will allow us to fill in gaps in antibiotic use information in Medicare claims and discharge datasets and to better leverage EIP and NHSN data to understand how cumulative antibiotic use may impact patients' risk for HAIs and AR infections. | 25/04/2026 | b) Developed in-house | Yes | These models will allow us to fill in gaps in antibiotic use information in Medicare claims and discharge datasets and to better leverage EIP and NHSN data to understand how cumulative antibiotic use may impact patients' risk for HAIs and AR infections. | No | Race/Ethnicity; Sex; Age | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Machine Learning Techniques for Early Detection and Situational Awareness of Rabies Outbreaks | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Rabies is enzootic in wildlife, and modern-day surveillance techniques are too weak to fully capture the geographic extent of outbreaks or to detect outbreaks early. Our ML algorithm uses public health surveillance data to "fill in the gaps" inherent in wildlife disease surveillance programs to accurately and rapidly detect outbreaks and deploy public health resources. | This model is currently being used in domestic and international settings for early rabies outbreak detection. This information is shared with relevant public health authorities to initiate preventive actions, which often include public awareness campaigns/social media, deployment of vaccines for animals and people, and deployment of testing reagents to bolster surveillance. | Disease trends for real-time monitoring of rabies; probabilities of disease occurrence over time and space. Spatiotemporal clustering with tiered risk classification differentiates stable circulation from emerging rabies transmission, improving situational awareness and guiding seasonally targeted surveillance and interventions, underscoring the need for real-time data sharing to strengthen outbreak response. | 25/01/2026 | b) Developed in-house | Yes | Disease trends for real-time monitoring of rabies; probabilities of disease occurrence over time and space. 
Spatiotemporal clustering with tiered risk classification differentiates stable circulation from emerging rabies transmission, improving situational awareness and guiding seasonally targeted surveillance and interventions, underscoring the need for real-time data sharing to strengthen outbreak response. | https://www.nature.com/articles/s41598-024-76089-3 | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AI-Powered web scanner for digital surveillance of rabies-related news articles | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Agentic AI | US travelers to international destinations are often exposed to rabies, and in some cases have died upon return to the US. Accurate and early identification of rabies outbreaks can help inform US travelers' pre-travel healthcare and vaccine decisions. Unfortunately, surveillance for and transparency of rabies outbreaks in international settings is unreliable and rarely reported through official government channels. | This scanner offers a low-resource method of scanning media for evidence of rabies outbreaks that jeopardize US travelers' health, faster and more reliably than relying on formal notifications or announcements from foreign governments. | Daily automated compilation of news reports with potential outbreaks, high-risk rabies exposures, species involvement, etc. | 25/05/2026 | b) Developed in-house | No | Daily automated compilation of news reports with potential outbreaks, high-risk rabies exposures, species involvement, etc. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Pathogen strain characterization from mixed strain samples | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We use the DNA sequences at specific places in pathogen genomes to create a "DNA fingerprint" that allows us to link cases of diarrheal illness and identify potential foodborne outbreaks. When a single patient has more than one strain of the same pathogen (e.g., two pathogenic E. coli), the pieces of the DNA fingerprint get mixed together in the sample and make the data unusable for outbreak surveillance. Our ML-based method is intended to sort the pieces of the DNA fingerprints into separate strains and make this data usable for foodborne outbreak surveillance. | Surveillance for diarrheal foodborne outbreaks currently depends upon the availability of bacterial isolates from patient stools, which provide the pathogen "genomic fingerprints" identifying pathogen strains. The availability of these isolates for fingerprinting is declining nationwide due to technological advancements that improve patient care. To maintain our ability to detect outbreaks without isolates, CDC is developing laboratory methods that obtain the pathogen genomic fingerprint directly from the patient stool specimen. However, patient stools frequently contain more than one strain of pathogen, so the ability to deploy these methods and maintain the sensitivity of foodborne outbreak surveillance is dependent upon development of this ML-based method to sort pathogen genomic fingerprint pieces from stool. Based on FoodNet data, we estimate that failure to implement these methods could lead to the loss of up to 75% of the samples currently captured by surveillance for some pathogens. Fewer surveillance samples will mean fewer outbreaks are detected and it will take longer to detect them, resulting in more people affected. For a sense of the scale of the challenge, NORS recorded ~300 outbreaks of Salmonella and E. coli in 2023 that were detected as a result of isolate-based surveillance. Economic impact evaluations have estimated that PulseNet surveillance alone prevents ~270,000 cases of foodborne illness in the US annually for a savings of at least $500,000,000 to the economy. | Our ML-based method 1) predicts the number of strains of a pathogen found in a single sample, 2) reports the DNA fingerprint defining each strain, and 3) gives the likelihood that two samples contain the same pathogen strain. | Our ML-based method 1) predicts the number of strains of a pathogen found in a single sample, 2) reports the DNA fingerprint defining each strain, and 3) gives the likelihood that two samples contain the same pathogen strain. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AMD-Platform Data Harmonization | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Submitted metadata may not match existing values, decreasing metadata quality. The goal is to harmonize deviant submitted metadata to best match our current values and ensure accurate metadata; for example, "Mtb" is converted to "Mycobacterium tuberculosis". | Standardized datasets for analysis, increased metadata quality, and reduced processing time. Developed to reduce the time staff spend implementing metadata submissions. | Updated dataset with standardized metadata and improved data quality. | Updated dataset with standardized metadata and improved data quality. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Text embedding analysis tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reviewing large sets of documents to identify similar clusters can take hours. The tool helps non-technical staff explore large, text-based datasets by generating clusters of text and identifying similar documents. | One benefit is being able to get a quick-but-principled analytic overview of a (potentially very large) text corpus's semantic content. This may yield time savings for tasks like responding to public inquiries and doing qualitative analyses of unstructured datasets. | The system generates text embeddings and helps users understand how their documents cluster in the embedding space. The system has no default output; it's primarily an AI-enabled canvas for drawing the embedding space and helping users explore the space rigorously. Users may, however, choose to export the embeddings, cluster assignments, or modified source datasets for use in other downstream analyses. | 25/01/2026 | b) Developed in-house | Yes | The system generates text embeddings and helps users understand how their documents cluster in the embedding space. The system has no default output; it's primarily an AI-enabled canvas for drawing the embedding space and helping users explore the space rigorously. Users may, however, choose to export the embeddings, cluster assignments, or modified source datasets for use in other downstream analyses. | No | k) None of the above | Yes | https://github.com/scotthlee/tars | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Agentic RAG Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The creation of accurate, detailed, and factually grounded draft responses to public inquiries takes a large amount of time and could benefit from assistance from LLMs. Because of how complex the responses to the inquiries can be, LLMs alone did not perform well enough to be useful in production, so we decided to build a more advanced chatbot that uses a small team of agents to refine the inquiries, decide what source data to use for grounding, and generate higher-quality draft responses that staff can use as a starting place for writing their replies. | The primary expected benefit is time savings, especially when programs are overwhelmed by acute increases in the volume of inquiries they receive after publishing a new guideline or regulation. | Draft responses to an inquiry to serve as a base for staff. The inquiry draft will follow all CDC public health research, data, and recommendations based on the best science currently available. Inquiry responses will go through a separate agency review process prior to release. | Draft responses to an inquiry to serve as a base for staff. The inquiry draft will follow all CDC public health research, data, and recommendations based on the best science currently available. Inquiry responses will go through a separate agency review process prior to release. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Internal Newsletter Formatter Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This project addresses the challenge of efficiently summarizing and formatting lengthy newsletter documents to provide leadership with clear, concise, and actionable information. | Expected to take less time summarizing and disseminating important information for leadership. | Cleaned and formatted newsletter for internal leadership staff. Output is edited and reviewed by communications staff prior to dissemination. | 25/06/2026 | b) Developed in-house | Yes | Cleaned and formatted newsletter for internal leadership staff. Output is edited and reviewed by communications staff prior to dissemination. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Determining if multiple isolates share the same antimicrobial resistant plasmids using short read whole genome sequencing data | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Determining if multiple isolates share the same antimicrobial resistant plasmids using short read whole genome sequencing data. | This will help in the early identification of outbreaks of antimicrobial resistant healthcare-associated infections caused by the horizontal transmission of plasmids, which will help to increase the speed at which outbreaks are detected and addressed, potentially decreasing cases and saving lives. | A probability estimating whether multiple isolates share the same antimicrobial resistant plasmid. | 23/12/2026 | b) Developed in-house | No | A probability estimating whether multiple isolates share the same antimicrobial resistant plasmid. | https://www.ncbi.nlm.nih.gov/datasets/genome/ | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Analyzing multidrug-resistant organism response data | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Free text data are collected as part of internal tracking of Antimicrobial Resistance. These data are difficult to clean, categorize, and analyze. | Cleaner and more accurate data on multidrug-resistant organism responses. | Categorized free text data related to Antimicrobial Resistance. This will improve the quality and usability of existing data. | Categorized free text data related to Antimicrobial Resistance. This will improve the quality and usability of existing data. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Leveraging AI for the Creation of Synthetic Datasets | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | We aim to generate synthetic datasets that will support training, testing, and software development, improving the overall development process. | The use of an AI-assisted programming environment can significantly reduce the time required to write code for generating synthetic datasets. This efficiency allows us to quickly create the necessary variables and establish the relationships between them. The synthetic data produced through this project will enhance the training experience and streamline the software development process. | The primary outputs from the AI system will include synthetic datasets specifically designed to simulate Healthcare-Associated Infection (HAI) data. | 25/04/2026 | c) Developed with both contracting and in-house resources | No | The primary outputs from the AI system will include synthetic datasets specifically designed to simulate Healthcare-Associated Infection (HAI) data. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DHP Data Repository and Dashboard Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | DHP uses data to drive action for preventing and addressing HIV in the United States (U.S.). Historically, DHP's data sources have not been easily accessible internally due to a siloed organizational structure. The DHP Data Repository and Dashboard aims to break down silos by co-locating and visualizing data from across the Division. Currently, the repository and dashboard focus on the management and presentation of numeric data. By integrating advanced analytics and natural language processing capabilities into the existing repository, we can increase the efficiency, accessibility, and usability of narrative information that is collected in DHP. | Expected benefits include easier and more automated processing of narrative information received in DHP, leading to time saved by staff currently processing the information and more efficient and easier use of and access to the information. One of the overarching goals of the DHP repository and dashboard project is to create a mechanism for Division leadership to make informed decisions more easily through easier access to information across work areas. The chatbot is expected to support this by providing a combined source of narrative information with an approachable user interface. | The expected output from this AI-focused project is a chatbot with a user interface, with the model behind it utilizing narrative information specific to DHP. Users will be able to ask questions at the national, regional, or state level and receive answers based on the information in the narrative documents. | The expected output from this AI-focused project is a chatbot with a user interface, with the model behind it utilizing narrative information specific to DHP. Users will be able to ask questions at the national, regional, or state level and receive answers based on the information in the narrative documents. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | HIV Data Quality Score (DQS) Project | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Currently, 60 health departments and 150 community-based organizations submit National HIV Monitoring and Evaluation (NHM&E) program data using a standard online form. However, this data often contains errors that require significant manual cleanup by CDC's HIV data managers. To address this issue, our project aims to create a large language model (LLM)-based data quality score capable of detecting errors in datasets and measuring dataset cleanliness levels. LLMs can also be utilized to automatically fix some detected errors. | This project intends to enable HIV data managers to quickly identify errors, track trends in data quality by site, provide targeted technical assistance (TA), and automate some error corrections. | The outputs include both a list of identified erroneous data fields in a dataset and a dataset with some errors automatically corrected. This will be available for future evaluation efforts. | The outputs include both a list of identified erroneous data fields in a dataset and a dataset with some errors automatically corrected. This will be available for future evaluation efforts. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NCHHSTP Social Media Modernization Strategy: Thought Leadership and Social Listening | a) Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To improve communication and ensure health messaging on social media platforms is responsive to the needs of the American people. AI-powered social listening tools are increasingly used in public health to analyze public conversations and audience sentiment in real time. We are using AI social listening tools to collect and analyze publicly available social media data. We then meet with a subject matter expert to discuss this information. Insights from the meeting with the SME and KPI data obtained natively on social platforms are used to draft, publish, analyze, and optimize social media content. | We are demonstrating how integrating AI-powered social listening into a communication strategy enhances audience engagement by enabling the creation of targeted content that addresses audience concerns and contributes factual, clinical information to trending conversations. NCHHSTP messages informed by AI-driven social listening data already show a significantly higher engagement rate and impact than those created without this approach. Weekly and monthly social listening reports also indicate an increase in the positive market share of online conversations, particularly during the promotion of NCHHSTP's updated guidelines and public comment periods. Notably, the engagement rate of one of our Center's social media accounts significantly increased from 0.017% in 2023 to 1.92% in 2024, representing a 1,029% increase since this strategy was implemented. | Visual network/cluster maps, volume and trend charts, sentiment analysis, top topics and keywords, influencers and top sources, demographics and audience insights, custom segments and thematic analysis, reports and dashboards. | Visual network/cluster maps, volume and trend charts, sentiment analysis, top topics and keywords, influencers and top sources, demographics and audience insights, custom segments and thematic analysis, reports and dashboards. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Lectora AI Toolkit and Microbuilder authoring tools | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using LLM to optimize National Health Interview Survey (NHIS) case note information | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The National Health Interview Survey (NHIS) employs Census field representatives (FRs) who use open text fields, referred to as case notes, to document their interactions with households during screening and interview processes. These case notes serve as a valuable resource, offering insights into the nature of these interactions and aiding in the identification of base cases: instances that may reveal significant data quality issues. Currently, the review of case notes is performed manually on a case-by-case basis, which limits opportunities for optimization. The objective of this initiative is to explore how large language models (LLMs) can enhance the efficiency and effectiveness of the case notes review process. | Utilizing large language models (LLMs) for case note reviews provides several advantages, including substantial time and cost savings, improved data quality post data collection, and the creation of more effective training programs. These enhancements not only optimize operational efficiency but also support the goals of public health organizations by ensuring that high-quality data is readily available for informed decision-making. | Identifying additional problematic cases not referred by Census; examining all cases from an FR whose case was referred to confirm whether similar issues exist in other cases the FR worked on; identifying themes in the case notes like certain letters/respondent materials that are in use, problematic interview strategies, or respondent confusion with questions. | 25/03/2026 | b) Developed in-house | Yes | Identifying additional problematic cases not referred by Census; examining all cases from an FR whose case was referred to confirm whether similar issues exist in other cases the FR worked on; identifying themes in the case notes like certain letters/respondent materials that are in use, problematic interview strategies, or respondent confusion with questions. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Auto-Suggest Journal Tool | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | It can be challenging and time-consuming for NCHS staff to research journals and gather the pertinent information to guide a decision of which journal to target for publication. Important information to consider when identifying a target journal includes the journal's field or subject matter, word limits, formatting and submission guidelines, whether the journal is open access, the impact factor, and acceptance rate. Existing tools use keyword searching, term frequencies, and word similarity scoring to identify potential journal matches, but AI presents the potential for a more effective approach that can consider more factors. | Reduction in researcher time spent searching through journal databases and websites to identify specific information about publication requirements. | The tool will output a list of the top matching journals along with key information about each journal, such as the field or subject matter of the journal, word limit, whether the journal is open access, impact factor, and acceptance rate. | The tool will output a list of the top matching journals along with key information about each journal, such as the field or subject matter of the journal, word limit, whether the journal is open access, impact factor, and acceptance rate. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Use of Natural Language Processing/Machine Learning to Identify Personal Identifiers in Health Center EHR Medication Data | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The NLP/ML processes being used are attempting to identify personal identifiers within EHR data fields that capture medication information, such as the name and dosage of the medication. | The expected benefits are that with the use of these techniques to remove any personal identifiers, more medication data can be made available in restricted-use data files for researchers and interested persons to analyze, which would ultimately allow more robust data for studying medications administered/present during visits to health centers. | The initial output provides lists/tables of person identifiers that were identified by this tool for review and/or removal from the medication data fields. | The initial output provides lists/tables of person identifiers that were identified by this tool for review and/or removal from the medication data fields. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AI-Assisted Extraction of Circumstance Information from National Violent Death Reporting System Narratives | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The National Violent Death Reporting System (NVDRS) compiles both quantitative and qualitative data on the circumstances surrounding violent deaths, including homicides and suicides, from three key data sources related to each death: the death certificate, the coroner/medical examiner report (including toxicology results), and the law enforcement report from the law enforcement agency that investigated the death. Data abstractors in state health departments across the U.S. abstract relevant information about each death from CME and LE reports, creating narratives that describe the most notable circumstances that contributed to the deaths captured in the system. Thus, much of the valuable contextual information about these deaths, such as details about chronic pain, interpersonal arguments, or other contributing factors, is embedded in free-text narratives and is not routinely abstracted into structured quantitative data fields. Manually extracting this information is labor-intensive, time-consuming, and subject to variability. The problem we are addressing with AI is the automated extraction of specific circumstance information from these unstructured narratives, enabling more comprehensive and systematic data analysis. | Automating the extraction of circumstance information from NVDRS narratives using AI brings several important benefits. First, it significantly reduces the time required to process narrative data. While manual abstraction is resource-intensive and can take hours or days to review thousands of records, AI can accomplish this task in a matter of minutes. This efficiency is especially critical given the scale of the challenge: the NVDRS captures data on over 70,000 violent deaths annually, making manual analysis of detailed free-text information in the CME and LE narratives for each of these incidents impractical. In addition to saving time, AI improves data quality and consistency by applying uniform criteria across all records, which helps to minimize human error and variability. Furthermore, by extracting additional details, such as information about chronic pain or the presence of arguments, AI enhances the surveillance capabilities of public health officials. This richer data enables a better understanding of risk factors and circumstances surrounding violent deaths, which in turn informs more effective prevention strategies. Collectively, these outcomes directly support CDC's mission to strengthen public health surveillance, guide prevention efforts, and ultimately reduce the incidence of violent deaths. | The AI system produces structured data outputs derived from the free-text narratives in the NVDRS. For each narrative, the system identifies and extracts predefined circumstance categories (e.g., presence of chronic pain, evidence of an argument, substance use) and outputs them as structured variables (e.g., binary indicators, extracted text snippets, or coded categories). These outputs can be integrated into existing NVDRS datasets, enabling further quantitative analysis and reporting. | The AI system produces structured data outputs derived from the free-text narratives in the NVDRS. For each narrative, the system identifies and extracts predefined circumstance categories (e.g., presence of chronic pain, evidence of an argument, substance use) and outputs them as structured variables (e.g., binary indicators, extracted text snippets, or coded categories). These outputs can be integrated into existing NVDRS datasets, enabling further quantitative analysis and reporting. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Leveraging GenAI for Efficient Review of CDC Programmatic Reports | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual qualitative review of text data within programmatic reports, such as Annual Progress Reports (APRs) submitted by Injury Control Research Centers (ICRCs) and Drug Free Communities (DFCs), is resource- and time-intensive. Across multiple use cases, generative AI (GenAI) and natural language processing (NLP) are being leveraged to automate the analysis of programmatic data. This approach streamlines the review and evaluation of various programmatic documents, improves efficiency, and supports the assessment of performance, progress, and challenges in funded activities. | By automating the extraction and analysis of critical information from programmatic data, generative AI is expected to significantly reduce the time required for manual coding and review. For example, initial applications have shown that AI can decrease manual review time from an estimated 35 hours to just 8 hours per topic, greatly enhancing efficiency. This time savings enables staff to focus on higher-level evaluations and strategic planning, improving the consistency and accuracy of assessments across multiple program areas. | The output from the AI-based framework consists of automated analyses and summaries of insights and patterns extracted from programmatic reports, such as APRs. The AI system highlights critical barriers, challenges, key themes, and trends identified within the data, providing structured summaries and actionable information. These outputs can be compared with manual qualitative analysis outcomes for validation and further refinement. As the framework evolves, the AI will be expanded to analyze additional sections of programmatic reports, including progress toward goals, program impact, and other relevant metrics, supporting comprehensive evaluation and reporting. | The output from the AI-based framework consists of automated analyses and summaries of insights and patterns extracted from programmatic reports, such as APRs. The AI system highlights critical barriers, challenges, key themes, and trends identified within the data, providing structured summaries and actionable information. These outputs can be compared with manual qualitative analysis outcomes for validation and further refinement. As the framework evolves, the AI will be expanded to analyze additional sections of programmatic reports, including progress toward goals, program impact, and other relevant metrics, supporting comprehensive evaluation and reporting. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Bridging the gap: Leveraging natural language processing to identify reasons for buprenorphine discontinuation in Electronic Health Records | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Life-saving treatment for opioid use disorder (OUD), such as the FDA-approved medication buprenorphine, remains underutilized. Buprenorphine has been shown to reduce illicit opioid use and risk of overdose mortality. Understanding treatment barriers can offer us opportunities for improved recovery. The PanTher Electronic Health Records (EHR) data from OptumLabs are a unique and important data asset, containing structured variables, such as diagnoses and procedures, laboratory measures, and medication records, as well as semi-structured data derived from clinical notes through natural language processing (NLP). The NLP-derived data contain helpful contextual information but have been difficult to use thus far. | The Data Science Upskilling Program advances a key focus of the agencys Data Modernization Initiative, i.e., that CDC's mission is to give all people the information they need for decision-making and wellbeing. Through participation in the Data Science Upskilling Program (DSU), the DOP-DSU team was able to extract actionable insights from EHR, contextualized further by supplementing with NLP-derived data from clinical notes. They developed an algorithm identifying patients with OUD who discontinued buprenorphine and used it to characterize discontinuation reasons using EHR. This has helped provide a fuller understanding of the what and the why surrounding discontinuation of this life-saving treatment, underscoring the need for strategies that improve retention in treatment. 
The team also built important DOP capacity in working with EHR data and NLP-derived data, including assessing data quality, and linking, processing, analyzing, visualizing and interpreting these data. | Through participation in the Data Science Upskilling Program (DSU), the DOP-DSU team was able to extract actionable insights from EHR, contextualized further by supplementing with NLP-derived data from clinical notes. They developed an algorithm identifying patients with OUD who discontinued buprenorphine and used it to characterize discontinuation reasons using EHR. They were also able to better understand limitations of NLP-derived data from provider notes in EHRs. However, despite the limitations of EHR, findings from this project can complement claims data and surveys from a patient care management perspective, and close the loop in our understanding of patients' medication access journey. | Through participation in the Data Science Upskilling Program (DSU), the DOP-DSU team was able to extract actionable insights from EHR, contextualized further by supplementing with NLP-derived data from clinical notes. They developed an algorithm identifying patients with OUD who discontinued buprenorphine and used it to characterize discontinuation reasons using EHR. They were also able to better understand limitations of NLP-derived data from provider notes in EHRs. However, despite the limitations of EHR, findings from this project can complement claims data and surveys from a patient care management perspective, and close the loop in our understanding of patients' medication access journey. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automating Influenza Vaccine Virus Data Processing | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | Candidate vaccine virus (CVV) data from public-facing websites are highly labor-intensive to gather. Previously, R scripts (text files with computer programming commands) automated some scraping processes but still relied on manual methods to extract data from PDFs. | The automation from this AI use case reduced processing time for new data from about one week to one day, enabling faster access and analysis of CVV data to enhance CDC's preparedness for upcoming flu seasons. | The use case automates extracting key phrases, recognizing text, processing forms, and identifying entities to streamline data extraction and supplement missing CVV data from PDFs. This tool is intended for internal use only. | 25/04/2026 | b) Developed in-house | No | The use case automates extracting key phrases, recognizing text, processing forms, and identifying entities to streamline data extraction and supplement missing CVV data from PDFs. This tool is intended for internal use only. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Lineage Assignment by Extended Learning (LABEL) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying, classifying, and annotating influenza sequences. | Reliable and accurate clade assignment helps with downstream surveillance reporting and modeling. Accuracy is generally >98%, and automation saves time. | Sequence identifiers and clade annotations, intermediate data used in classification. | 14/01/2026 | b) Developed in-house | Yes | Sequence identifiers and clade annotations, intermediate data used in classification. | No | k) None of the above | Yes | https://github.com/CDCgov/label | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Enhancing influenza A risk assessment rubrics: leveraging predictive correlates and machine learning from in vivo experiments. | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Pathogenesis Laboratory Team (Immunology and Pathogenesis Branch, Influenza Division, NCIRD) routinely performs influenza A virus (IAV) risk assessment studies in the ferret animal model, to assess IAV pathogenicity and transmissibility in a relevant small mammalian model. However, these studies are typically performed in isolation, with minimal efforts to comprehensively examine how each biological parameter obtained from the work correlates to disease severity and virus transmissibility and how these parameters can be used as a whole to improve risk assessment efforts. We have generated large sets of data collected from 25+ years of performing these in vivo studies, to identify predictive correlates associated with pathogenicity and transmissibility outcomes, and utilized machine learning approaches to better predict the potential public health risk posed by emerging influenza A viruses. | CDC's Influenza Risk Assessment Tool (IRAT) rubric is utilized to assess the pandemic potential of novel or emerging IAV that pose a threat to human health.
A better understanding of which key quantifiable metrics of virus behavior in this species are most frequently correlated with virulence or transmissibility would greatly aid CDC leadership who score viruses in this rubric to ensure contributing data from the ferret model is rigorously and accurately contextualized within these risk assessments. As the project relies solely on previously collected in vivo data, it represents a valuable opportunity to support the 3 Rs of animal research (reduction, refinement, and replacement), gathering additional information from 25+ years of research in the ferret model already conducted at CDC, thus highlighting the agency's commitment to responsible and ethical animal research. Numerous peer-reviewed publications have already resulted from this work, including development of predictive models of lethal disease and virus transmissibility, and assessment of which parameters and sample types collected during routine laboratory experimentation offer highest predictive value in these models. These first-in-field analyses also provide an analytic framework and template for subsequent studies with other data collected at CDC. | The machine learning work we perform identifies which variables are more predictive for the associated pathogenesis or transmission outcome, which better informs our understanding of the biology of the influenza-ferret model system, how to interpret the clinical and virological data we collect, and pandemic risk assessments. | 23/05/2026 | b) Developed in-house | No | The machine learning work we perform identifies which variables are more predictive for the associated pathogenesis or transmission outcome, which better informs our understanding of the biology of the influenza-ferret model system, how to interpret the clinical and virological data we collect, and pandemic risk assessments.
| https://data.cdc.gov/National-Center-for-Immunization-and-Respiratory-D/An-aggregated-dataset-of-serially-collected-influe/cr56-k9wj/about_data ; https://data.cdc.gov/National-Center-for-Immunization-and-Respiratory-D/An-aggregated-dataset-of-day-3-post-inoculation-vi/d9u6-mdu6/about_data | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | RepoAnalysis | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | We are trying to generate a summary of code repositories that are insufficiently documented. | This would allow people to search for code and reuse code that is not documented well enough for traditional search engines. | The output from the AI is a summary of what the code repository does based on the source code and/or README. | 25/01/2026 | b) Developed in-house | Yes | The output from the AI is a summary of what the code repository does based on the source code and/or README. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Data Standardization with LLM | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The purpose of this project is to enhance data standardization efforts by leveraging large language models to improve data cleaning and standardization processes, ultimately enhancing the overall efficiency and accuracy of data. | Improve data cleaning and standardization processes within our data systems. Enhance the accuracy and reliability of data stored. Streamline workflows by automating certain tasks related to data cleaning and standardization. More specifically, the ultimate scope of this project includes integrating the data, as well as rules, into the existing infrastructure of FluLIMS. For example, sometimes we have misspelled locations or various ways of referring to the same place (e.g., ATL vs. Atlanta) and we would like to standardize that. By implementing this proposed project, we anticipate significant improvements in data cleaning and standardization processes within FluLIMS, leading to enhanced efficiency, accuracy, and overall effectiveness in managing flu-related information. This will most likely lead to at least a 50% reduction in the time and effort needed to clean data. This approach can also be used for cleaning other data or for other processes where an LLM would be useful. | This would result in a higher quality dataset with increased standardization and increased usability of the insights for staff. | This would result in a higher quality dataset with increased standardization and increased usability of the insights for staff. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using Databricks Genie for Routine Immunization Data Insights | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | The volume of immunization data reported quarterly is approximately 5 billion records. Analyzing this data to gather high-level insights requires complex coding and manipulation. By using a Large Language Model (LLM) based approach, the solution offers a plain language query capability that generates code to provide high-level insights without the need to move data or create specific views for program needs. This approach is cost-effective, timely, and provides insights that can be used to improve data quality and inform data management planning by programs. | The use of a Large Language Model (LLM) based approach for generating code to analyze immunization data offers several promising benefits for the CDC's immunization program staff and data operation team: 1. Enhanced Efficiency and Time Savings: By enabling plain language queries, this approach significantly reduces the time and effort required for complex data manipulation and coding. This will allow program and data ops staff to focus on more critical tasks and to plan data management tasks more efficiently. 2. Improved Data Quality and Management: The insights generated can help identify data quality issues and inform better data management practices, potentially leading to more accurate and reliable data. 3. Cost-Effectiveness: Simplifying the analysis process reduces the need for extensive manual labor and specialized coding skills, as well as compute costs. 4. Scalability: Handling approximately 5 billion records quarterly, this approach can scale to meet the demands of large datasets, ensuring timely and comprehensive analysis.
| SQL Code, Reports, Visualization and Charts | 25/06/2026 | c) Developed with both contracting and in-house resources | Yes | SQL Code, Reports, Visualization and Charts | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Vaccine Tracking System (VTrckS) Conversational AI Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | The VTrckS Conversational AI module will provide a user-friendly, prompt-driven approach to gaining insights into the VTrckS datasets. The current approach involves utilizing SAP HANA's reporting modules and custom dashboards. The conversational AI approach will allow for a more human-friendly method to gain quick insights without having to learn about the underlying data models. | This AI pilot aims to add efficiency for CDC awardees and CDC program staff in gathering insights from the VTrckS order, distribution, shipping, transfers, and provider information available in NCIRD's Advanced Business Intelligence Platform (NABIP). This tool will allow awardees to rapidly gather insights with no coding required, both during routine operations and local or national emergency response. Some examples of data insights users gather from the data include: - Assess active providers: Determine if a provider is compliant to obtain vaccines through the VFC or 317 programs. - Provider management: Validate the address and scope of providers offering to the public. - Determine providers with specific vaccine availability: Determine provider vaccine inventory unique to vaccines distributed. - Assess provider locations against vulnerable populations: Quickly respond to routine or emergent requests for information related to provider locations with key vaccine inventory. | The tool will output responses to prompts that give VTrckS program users and awardees insights on their data sets. For example: Input: How many providers are in the State of Washington Output: | 25/10/2026 | c) Developed with both contracting and in-house resources | Yes | The tool will output responses to prompts that give VTrckS program users and awardees insights on their data sets.
For example: Input: How many providers are in the State of Washington Output: | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Beryllium exposure reconstruction | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Using machine learning to extract information about job title and task and assign relevant exposure codes. | Previously exposure codes were assigned manually by researchers. This method would increase consistency and accuracy of exposure coding and substantially reduce the amount of time needed to assign exposure codes and reduce exposure misclassification. | Exposure codes related to Beryllium exposure. | Exposure codes related to Beryllium exposure. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Site Audit AI Support (SAAIS) App | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NIOSH conducts numerous post-market activities to ensure that respirator configurations approved by NIOSH remain protective. Among these activities are audits of the manufacturing sites where the approved products are produced. These audits involve detailed evaluations of sites against quality assurance plans approved by NIOSH. Sites are audited every few years and, thus, previous reports of nonconformances may be of particular interest to auditors. SAAIS is a prototype application being developed as an AI use case to inform: 1. The CDC Office of the Chief Information Officer (OCIO)'s implementation strategy for cloud-based enterprise services, standalone AI tools, and AI-enhanced systems 2. The specific enterprise approach, AI tools, and training techniques to be used by Respirator Approval System (RAS) developers when AI enhancements are eventually added following the completion of the base system within the Power Platform enterprise system. | Reduced time and errors and greater consistency related to audits of manufacturing sites used to produce NIOSH Approved respirators. | Four versions of the NPPTL App were developed, each introducing incremental features to enhance its functionality. Details are available upon request. Capabilities currently include: drag-and-drop file uploads; multiple file uploads; clear navigation and guidance; download options for AI outputs; connection of evidence to CAR items; classification of non-conformances; support for Excel and email file uploads; and enhanced feedback mechanisms and reporting features. | Four versions of the NPPTL App were developed, each introducing incremental features to enhance its functionality. Details are available upon request.
Capabilities currently include: drag-and-drop file uploads; multiple file uploads; clear navigation and guidance; download options for AI outputs; connection of evidence to CAR items; classification of non-conformances; support for Excel and email file uploads; and enhanced feedback mechanisms and reporting features. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Respirator Selection Logic (RSL) Copilot | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Workers rely on NIOSH Approved respirators to protect them from inhaling high-consequence particulate, gas, and vapor hazards. Some examples of these respiratory hazards include: wildfire, structural, or surgical smoke; mold during post-flood remediation efforts; infectious diseases such as tuberculosis; chemicals used to clean or disinfect; and particles liberated when cutting rock in industries such as construction and mining. Selecting the correct respirator to protect workers requires knowledge of the hazard or hazards present, the job task, and the environment. NIOSH's Respirator Selection Logic (RSL) is a state-of-the-art tool designed to guide the selection of appropriate respiratory protection devices based on specific workplace hazards and conditions. The RSL requires users to enter detailed, task- and environment-specific information at multiple decision points to execute its logic correctly. | No current AI tool operationalizes the RSL while addressing the challenge of gathering and validating highly specific input information required at each decision point. The absence of such assistance leads to errors in respirator selection that can cause hazardous exposures, regulatory violations, and adverse health outcomes. Developing Ally to close this gap will improve the effectiveness of respiratory protection programs by ensuring that users supply accurate, relevant data to the RSL, thereby enhancing the quality and traceability of respiratory protection decisions aligned with established federal guidance. | Upon completion of this project, users of the RSL will be able to: Receive real-time guidance on what information is required for respirator selection and why it matters.
Provide input in natural language rather than navigating technical documents or forms manually. Understand and apply the RSL more effectively, leading to fewer errors in respirator selection, improved compliance, and stronger respiratory protection outcomes. Use Ally as a decision support tool, not a decision maker, to identify and clarify required inputs, understand the rationale behind each RSL step, and access authoritative guidance. The Copilot will always keep the user in the loop, helping them apply judgment while ensuring traceability to official sources like NIOSH and OSHA. This project will demonstrate how AI can support complex public health decision tools like the RSL while maintaining user accountability, transparency, and regulatory defensibility. | Upon completion of this project, users of the RSL will be able to: Receive real-time guidance on what information is required for respirator selection and why it matters. Provide input in natural language rather than navigating technical documents or forms manually. Understand and apply the RSL more effectively, leading to fewer errors in respirator selection, improved compliance, and stronger respiratory protection outcomes. Use Ally as a decision support tool, not a decision maker, to identify and clarify required inputs, understand the rationale behind each RSL step, and access authoritative guidance. The Copilot will always keep the user in the loop, helping them apply judgment while ensuring traceability to official sources like NIOSH and OSHA. This project will demonstrate how AI can support complex public health decision tools like the RSL while maintaining user accountability, transparency, and regulatory defensibility. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | PPE Concerns Copilot | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The PPE Concerns Mailbox receives inquiries that NIOSH/NPPTL staff must review and respond to. To date, over 10,000 questions have been received with answers provided. Responding to an inquiry requires a multi-step process that staff must complete manually, including: reading the inquiry; logging the inquiry and information about the inquiry and inquirer in an Excel spreadsheet; searching through the spreadsheet of past questions; reviewing prior responses for relevance; drafting a reply based on similar previous responses, or researching and composing a new answer from scratch if no match is found; sending the response to additional staff if further subject matter expert review is needed; obtaining Executive Leadership review and approval, if needed; and updating the spreadsheet with the finalized response, reply date, and staff who assisted with the response. This process can be time-consuming, repetitive, and inconsistent, especially when multiple team members are handling and categorizing inquiries or when there are high volumes. | The Copilot could handle the initial steps (reviewing the question, searching past responses, and drafting a reply), which currently take the most time. For straightforward or simple, repeat questions, this could reduce staff time from 30 to 60 minutes to under 10 minutes, with staff only needing to review and finalize the AI-generated draft. Even for more complex inquiries, having a well-structured starting point and a searchable interface would significantly cut down manual effort and improve turnaround time across the board. Additionally, the Copilot would improve time savings when a new staff member is assigned to manage the mailbox due to events such as staffing changes.
The Copilot would allow a more seamless transition of the mailbox, whereas the current process requires months of on-the-job training to effectively navigate the spreadsheet and learn the proper standard responses to use for specific inquiries. Additionally, the Copilot would remove the burden of relying on memory recall to determine where a previous response is in the spreadsheet when staff receive specific, repeat questions. | The Copilot will analyze incoming questions, search the existing dataset for relevant responses, and generate draft replies for human review. After staff revise and approve the response, the Copilot will assist by sending the finalized email to the submitter and updating the spreadsheet with the new question-and-answer (Q&A) pair and supporting information. | The Copilot will analyze incoming questions, search the existing dataset for relevant responses, and generate draft replies for human review. After staff revise and approve the response, the Copilot will assist by sending the finalized email to the submitter and updating the spreadsheet with the new question-and-answer (Q&A) pair and supporting information. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Assessing public comment responses to draft NIOSH wildland fire smoke document | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Efficiently compile and synthesize the public comments received for a draft NIOSH document. | Make the NIOSH response to public comments more efficient and effective and reduce the time needed to review public comments. | Public comment responses compiled in various formats to make the response process more efficient. | Public comment responses compiled in various formats to make the response process more efficient. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Mining.AI | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Mining.AI is an innovative initiative led by a small team at NIOSH tasked with developing a domain-specific generative LLM AI platform focused on improving health and safety in the mining industry. The project consolidates NIOSH's latest peer-reviewed research and safety knowledge to support decision-making, hazard prevention, and incident response. | Accelerates Access to Critical Safety Knowledge: Mining.AI enables instant retrieval of NIOSH mining research, technical reports, and best practices, replacing slow manual searches through multiple archives. Supports Faster, Safer Decision-Making: Frontline professionals, engineers, and safety managers can query the AI to get tailored responses grounded in NIOSH-validated scientific evidence. Preserves Expertise: Encodes decades of institutional knowledge into a reusable, interactive platform, mitigating the impact of staff turnover and retirements. Promotes Research Translation: Converts dense, technical research into actionable language accessible to a wider range of users, including mine operators and workers. | A NIOSH-specific internal chatbot tool that researchers can interact with to assist in the digestion of all previous NIOSH published articles. | A NIOSH-specific internal chatbot tool that researchers can interact with to assist in the digestion of all previous NIOSH published articles. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CSB MCP AI | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Reduce required man-hours and improve the accuracy of reports on CSB infrastructure by assisting infrastructure administrators with tasks and duties. | Expected to generate reports for and answer questions posed by senior staff about CSB infrastructure, improving accuracy and freeing infrastructure administrators from these tasks. Assist infrastructure administrators with tasks and duties, improving accuracy and completion time. | Reports, infrastructure code generation, and task execution. | Reports, infrastructure code generation, and task execution. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Support for MCP and AI APIs in Digital Gateway | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Digital Gateway (CDC's API Platform) can support MCP and AI APIs. Given that MCPs are a new addition to the ecosystem, the goal is to explore what it takes to implement MCPs internally. | Centralized Management: The Digital Gateway can act as a centralized point for managing interactions between AI agents and MCP servers. This simplifies the integration process and ensures consistent policy enforcement across all tools and data sources. Enhanced Security: By routing all requests through the Digital Gateway, the CDC can enforce security policies, access controls, and rate limiting. This helps protect sensitive health data and ensures compliance with regulatory requirements. Improved Data Governance: The Gateway can provide visibility into data usage and interactions, helping the CDC maintain robust data governance practices. This includes monitoring access, usage patterns, etc. | The output will be a test Model Context Protocol (MCP) implementation. This will be an internal server which will have access to public-only data to support future development of the related Digital Gateway infrastructure for future potential MCPs and APIs. | The output will be a test Model Context Protocol (MCP) implementation. This will be an internal server which will have access to public-only data to support future development of the related Digital Gateway infrastructure for future potential MCPs and APIs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Delegations Repository Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Delegations Library, which houses critical guidance on authorities and approvals, has been selected as the pilot use case due to its centralized structure and high relevance. Staff frequently report difficulty locating the right documents, resulting in delays and inefficiencies. This project will use CDC's internal EDAV platform to develop a chatbot assistant that helps users retrieve existing content more easily. The proof of concept will test technical feasibility, user experience, and potential for broader application to support CDC's operational mission. | Staff currently spend considerable time searching for this type of guidance, which slows administrative actions and diverts focus from core public health work. By streamlining access to internal policies and procedures, this proof of concept supports greater operational efficiency. While the chatbot does not directly impact health outcomes, it enables staff to redirect time toward critical public health priorities and lays the groundwork for applying similar tools across other business functions that support CDC's mission. | Staff will receive responses with information from existing delegation-related documents within the internal Delegations Library. The responses will include citations and references to the delegations library documents. | Staff will receive responses with information from existing delegation-related documents within the internal Delegations Library. The responses will include citations and references to the delegations library documents. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | OFR Robotics and Process Automation (ORPA) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Agentic AI | Reduction in human workload and increase in quality through automation of repetitive tasks. | Multiple Robotics and Process Automation (RPA) bots are in use. Most are simple automations of basic tasks (conversion of documents/file management). The two most significant bots focus on automating the sorting and distribution of thousands of invoices annually (Invoice Mailbox Management) and conducting government purchase card use validations monthly (PCARD Line Item Review). The result of the two bots is greater consistency/quality in the outputs and a reduction in human workload. The Invoice Mailbox Management bot processes >15,000 emails annually and the PCARD Line Item Review bot processes 3,000 to 5,000 transactions monthly against more than 200 unique business rules. | The Invoice Mailbox Management bot creates PDF files with the email and attached invoices/documentation into consolidated files for further processing by staff for payment into UFMS. The PCARD Line Item Review bot generates a list of transactions (from the full list of CitiBank credit card transactions) that have met specific business rules and may be a potential policy violation or need additional attention. The output also includes policy references and notes (specific to the transaction) to aid the reviewer in determining the next course of action for each item. | 24/05/2026 | c) Developed with both contracting and in-house resources | UIPath | Yes | The Invoice Mailbox Management bot creates PDF files with the email and attached invoices/documentation into consolidated files for further processing by staff for payment into UFMS.
The PCARD Line Item Review bot generates a list of transactions (from the full list of CitiBank credit card transactions) that have met specific business rules and may be a potential policy violation or need additional attention. The output also includes policy references and notes (specific to the transaction) to aid the reviewer in determining the next course of action for each item. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | FERRET | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | High throughput processing of unstructured data necessary for mandated reporting. | Usage of programs to automate identification, extraction, and re-structuring of data will significantly decrease human involvement and processing time. Once deployed, the time savings are anticipated to be on the scale of weeks to months. | Structured data deposited into a SQL database. | Structured data deposited into a SQL database. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Deep Research for Public Health | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Public health agencies like the CDC face challenges in efficiently processing large volumes of complex information, conducting evidence-based research, and producing timely, high-quality analyses and reports to inform decision-making. Traditional workflows for tasks such as literature review, data analysis, policy evaluation, and communications are often time-consuming and resource-intensive. The problem addressed by integrating agentic AI models, such as OpenAI's Deep Research, is to enhance the efficiency, productivity, and rigor of these core public health functions by automating information retrieval, synthesis, and analysis, thereby enabling faster, more informed, and scalable decision-making while maintaining quality and transparency. | Empirical evidence from the report shows that the AI compressed tasks that would normally take days or months into a single automated workflow, with 92% of subject matter experts reporting substantial productivity gains. Quantitative analysis from an internal study found that 94% of prompts resulted in successful, high-quality reports (median rating of "very good"), with most completed in under 30 minutes. The AI demonstrated strong effectiveness in information retrieval, data analysis, and strategic planning across a wide range of public health domains, enabling faster, more informed decision-making and allowing CDC staff to focus on higher-level work that benefits public health outcomes. | The output from the AI system consists of detailed, report-style responses tailored to specific public health tasks and prompts. These reports typically include synthesized information from online sources, data analysis, summaries of scientific evidence, policy or legal analysis, and clear recommendations or findings, often with citations. The reports are structured, well-organized, and written in clear language, making them easy for CDC staff and subject matter experts to review and use. According to the evaluation, the outputs scored highly for clarity and reasoning transparency. | 25/04/2026 | a) Purchased from a vendor | OpenAI | Yes | The output from the AI system consists of detailed, report-style responses tailored to specific public health tasks and prompts. These reports typically include synthesized information from online sources, data analysis, summaries of scientific evidence, policy or legal analysis, and clear recommendations or findings, often with citations. The reports are structured, well-organized, and written in clear language, making them easy for CDC staff and subject matter experts to review and use. According to the evaluation, the outputs scored highly for clarity and reasoning transparency. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Federal Select Agents Program (FSAP) Customer Agent | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FSAP users need support on the use of the FSAP system: completing forms; managing entities, agents, and toxins; and updating permits, correcting data, conducting inspections, and related activities. The FSAP Customer Agent will provide answers to questions as users navigate the FSAP process. | Support will be provided to users without additional staffing requirements. Users will get fast and accurate solutions to questions instantly. | The output is text via an LLM trained on operations and management data. Output is via both a chat window and a copilot for internal use. | 25/03/2026 | b) Developed in-house | Yes | The output is text via an LLM trained on operations and management data. Output is via both a chat window and a copilot for internal use. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Publication Portfolio Analytics | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Data Strategy and Analytics Team (DSAT) within OS employs natural language processing (NLP) topic modeling techniques to help programs identify common themes within their publication data. By combining these efforts with bibliometric analysis, we standardize the reporting of media attention, as well as policy and academic citations, by theme and/or CDC organization. Automating these efforts helps the CDC library to optimize their allocation of resources and avoid duplication of effort. | OS DSAT utilizes NLP to generate organization-specific reports and maintain an agency-wide dashboard. In the case of the agency-wide publication impact dashboard, NLP topic modeling was used to identify common publishing themes for the agency using 10+ years of CDC publication data. This allows users to see how publishing topics have trended over time and, when connected to media attention and citation data, supports communication teams, leadership, and scientists in assessing the impact of their programs' publications. This dashboard has proven to be impactful, with 105 unique CDC staff across several CIOs and divisions using the dashboard between 7/1/2025 and 8/1/2025. | The outputs of the OS DSAT Publication Portfolio Analysis work include a pipeline/Power BI dashboard workflow and several center- and division-specific reports and presentations. | 23/06/2026 | b) Developed in-house | Yes | The outputs of the OS DSAT Publication Portfolio Analysis work include a pipeline/Power BI dashboard workflow and several center- and division-specific reports and presentations. | No | k) None of the above | Yes | Sample code for publication portfolio analytic activities can be found here: https://github.com/cdcai/analysis-bertopic-cdc-publications/blob/main/Topic%20Modeling/2024_CDC_Topic_Model_Code_Workbook.ipynb | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CDC Vault and Stacks Metadata Extraction | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Agentic AI | We are attempting to speed up the process of generating a digital metadata record for objects that will be curated and stored in either the CDC Public Access Platform (Stacks) or CDC Vault. These two systems are built using the same software stack, but one is for public data and the other is for non-public data. To create a metadata record solely with a human, the process takes about an hour per document. We are looking to improve the process to use AI to prepare the metadata record and reduce the human time to under 5 minutes. A secondary objective is to have a non-human process for the non-public data that will go into CDC Vault. | There are two primary paths and uses for the AI-assisted pre-processing. The first is to improve the speed and effectiveness of human catalogers/librarians. Long term, we need to be able to process more data and require AI to improve this process so that humans are only working on critical steps and validation of the AI. This process is going from 60 minutes per document to <5 minutes per document. The second is to process federal records prior to a record being entered into CDC Vault and copied to NARA. This process will not have a human review as the final disposition is not public, but we need to process a large number of files (hundreds of thousands to millions). This is simply not realistic to do manually, so this is a novel opportunity. | The AI will return up to 41 metadata elements (e.g., Title, Author, Subject, Description, Funding Source, Geographical Location). | 25/04/2026 | c) Developed with both contracting and in-house resources | Yes | The AI will return up to 41 metadata elements (e.g., Title, Author, Subject, Description, Funding Source, Geographical Location). | https://stacks.cdc.gov/ | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AIP Assist | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | AIP Assist supports user engagement with the Palantir Platform. It is an LLM-powered support tool that can answer inquiries about the platform and guide users in developing their own applications by helping them write their own code or pointing them to the right tool for the task. It supports assistance for platform navigation, coding tasks, documentation support, and problem solving for users. | AIP Assist benefits the agency and general public by enabling users to quickly understand the platform, which allows them to quickly ramp up the development of new applications. It supports assistance for platform navigation, coding tasks, documentation support, and problem solving for users when working on applications in many roles including data science, data engineering, machine learning, and AI. | AIP Assist is an LLM-powered tool available to all 1CDP users. It makes the platform's capabilities accessible to users through generative AI and internal documentation. This tool works as an assistant that helps users understand the platform and quickly iterate on the development of new applications. The assistance provided by AIP Assist is generated text intended to be used for platform navigation, coding tasks, documentation support, and problem solving. | 24/10/2026 | a) Purchased from a vendor | Palantir Technologies | Yes | AIP Assist is an LLM-powered tool available to all 1CDP users. It makes the platform's capabilities accessible to users through generative AI and internal documentation. This tool works as an assistant that helps users understand the platform and quickly iterate on the development of new applications. The assistance provided by AIP Assist is generated text intended to be used for platform navigation, coding tasks, documentation support, and problem solving. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Center for Forecast and Analytics (CFA) Model Studio | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Other | This tool is intended to provide streamlined infrastructure that allows users to bring their own models or discover existing ones and evaluate them in a consistent way. For example, users can develop their models on their local machines, containerize them, and deploy them using CFA Model Studio in the platform for further experimentation, parameter exploration, and registry. | This tool is intended to extend the Native Modeling Objectives capabilities on platform to reduce the effort and burden around bringing models on platform, and especially to allow for R-based models for fine evaluation. Additionally, it adds flexibility for users to utilize their preferred modeling language and tools. | Model Library is a tool where users can go to discover, upload, and test their various models. The outputs are typically test runs and the ability to evaluate model performance. | 25/01/2026 | c) Developed with both contracting and in-house resources | Palantir Technologies | Yes | Model Library is a tool where users can go to discover, upload, and test their various models. The outputs are typically test runs and the ability to evaluate model performance. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CM | AI-assisted comment triaging tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To reduce the labor hours of manual comment review, we use AI to assist in comment review and triage and in identifying form letters. | Cost savings and time savings to the government. | Compiles public comments by topic in the rule. | 22/06/2026 | c) Developed with both contracting and in-house resources | L&M Policy Research, LLC | No | Compiles public comments by topic in the rule. | Contractor uses prior year public comments to train the tool for the upcoming comment period. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Enhanced Direct Enrollment Outlier Detection | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The Enhanced Direct Enrollment Outlier Detection use-case is not considered high-impact as it does not serve as the basis for decisions or actions that affect civil rights, liberties, privacy, critical resources, safety, or strategic assets. The AI models within this use-case use machine learning techniques to detect anomalous or inconsistent patterns relative to standard partner actions and other application channels outside of the EDE pathway. The findings are shared with the agency divisions in the form of different reports and tables. The data alone is not enough to determine fraud, however, can be used by CMS in tandem with other data to determine if CMS should take any corrective actions. CMS makes all determinations on actions. | Classical/Predictive Machine Learning | (Marketplace) Enhanced Direct Enrollment (EDE) allows consumers to apply for and enroll in an exchange plan directly through an approved partner's UI, without being redirected through the Healthcare.gov application. These partner systems directly interface with the APIs developed by the FFE. As EDE partners gain more control over their application process, the FFE must ensure program integrity. | Implement ML to identify anomalies/quality issues with partner-submitted person, application, and policy data. | Ensure FFE EDE program integrity | Ensure FFE EDE program integrity | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | Feedback Analysis Solution (FAS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Speeds up the manual and time-consuming process of analyzing public comments on frequently posted regulations.gov and FDMS dockets by employing advanced natural language processing and machine learning technologies. | Speeds up the manual and time-consuming process of analyzing public comments on frequently posted regulations.gov and FDMS dockets by employing advanced natural language processing and machine learning technologies. FAS helps categorize stakeholder comments or feedback (collected in multiple venues), thereby enabling analysts to use the system to quickly identify comments that may impact program/policy decisions. | FAS categorizes stakeholder feedback (collected in multiple venues), thereby enabling CMS analysts to use the system to quickly identify comments that may impact program/policy decisions. The system utilizes Artificial Intelligence (AI) to minimize bias through topic, theme, stakeholder, and sentiment models that standardize the analysis process and provide insights that were previously difficult to obtain manually. | 21/09/2026 | b) Developed in-house | Yes | FAS categorizes stakeholder feedback (collected in multiple venues), thereby enabling CMS analysts to use the system to quickly identify comments that may impact program/policy decisions. The system utilizes Artificial Intelligence (AI) to minimize bias through topic, theme, stakeholder, and sentiment models that standardize the analysis process and provide insights that were previously difficult to obtain manually. | Uses an API to pull comments from Regulations.gov, FDMS, and the Federal Register. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | IT System Utilization Optimization | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) The FFE application usage patterns vary and are dependent on differing environment usage periods. CMS resources are currently scaled manually, which prevents immediate action in response to usage changes. | Implement ML to determine optimized infrastructure/application scaling to support system volume. | Automation of application scaling | Automation of application scaling | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Risk Adjustment Outlier Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The risk adjustment (RA) program spreads the financial risk borne by Issuers due to offering a variety of plans meeting the needs of a diverse population. RA payments are distributed based on population risk levels. CCIIO uses a distributed data solution (EDGE servers) to calculate plan average actuarial risk and associated RA transfers and must avoid potential program integrity risks to annual calculations. | Implement ML to identify outliers in issuer data that may unduly influence risk adjustment transfers. | Maintain RA program integrity | Maintain RA program integrity | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Agent/Broker Fraud Analysis (ABCQI) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The Agent/Broker Fraud use-case is not considered high-impact as it does not serve as the basis for decisions or actions that affect civil rights, liberties, privacy, critical resources, safety, or strategic assets. The AI models within this use-case use machine learning techniques to detect patterns that are inconsistent or anomalous relative to standard consumer, agent broker, and partner actions. Our findings are shared with the agency divisions in the form of reports and tables and can be used in combination with additional information derived outside of this AI tool to determine if CMS should take any corrective actions. All outcomes are internal facing. CMS makes all determinations on actions. | Classical/Predictive Machine Learning | (Marketplace) Agents and brokers support the consumer enrollment and eligibility process. Because of this, they have learned the intricate details of the Federally-facilitated Exchange (FFE) for accessing applications, submitting eligibility determinations, and adding enrollments to their line of business, opening up the possibility of fraud. | Implement Machine Learning (ML) to identify potential fraud/waste/abuse within Agent/Broker data. | Reduce waste, fraud, and abuse | Reduce waste, fraud, and abuse | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | CCSQ ServiceNow AI Search | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI Search index stores data from ServiceNow AI Platform® records or external sources and makes that data available for users to search in multiple applications. Search query features use ServiceNow AI Platform technologies to improve the search user experience. | CCSQ ServiceNow AI Search is a product inside ServiceNow (SaaS). It replaces the traditional Zing search tool (exact search) and enables more flexible searches so users can get relevant and actionable answers quickly. * Improve search relevance * Promote self-service by empowering users to find information independently, and potentially reduce the number of cases | AI Search will * Display the most relevant results first * Support synonyms, auto-corrections, stop words, and auto-completion AI Search analytics will provide insights into search usage, performance, trends, metrics, and how to improve search experiences. An AI Search tune-up is planned next. | 24/04/2026 | a) Purchased from a vendor | Yes | AI Search will * Display the most relevant results first * Support synonyms, auto-corrections, stop words, and auto-completion AI Search analytics will provide insights into search usage, performance, trends, metrics, and how to improve search experiences. An AI Search tune-up is planned next. | CCSQ ServiceNow Database | Yes | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OPOLE | Complaint Analysis | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Conduct high volume case analysis to identify root causes/trends that can be validated by SMEs and act as a workforce multiplier. | Reduce repeat issues delaying benefits or access to care, and improve health plan compliance with federal rules and regulations. | Identify trends, applicable regulatory citations, sample for validation, and recommended next steps. | 24/02/2026 | b) Developed in-house | Yes | Identify trends, applicable regulatory citations, sample for validation, and recommended next steps. | data extracts from HPMS/CTM, and manually validated results based on standard criteria | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Help Desk Responses | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | (Marketplace) The Division of Issuer Management and Operations has talked with their contractor, LMI, about using an AI tool for the Help Desk contract. | Generate responses to common questions from issuers and external organizations based on previously cleared material. | Reduce the amount of time contractor staff need to generate answers for SME review and approval of responses to issuers and other external entities that ask questions of the help desk. | Reduce the amount of time contractor staff need to generate answers for SME review and approval of responses to issuers and other external entities that ask questions of the help desk. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | Resource Library Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Users need to be able to find information related to program participation. | Improving access to program information, reducing service desk burden, and improving the user experience for searches. | The AI pulls responses from preapproved documents that were fed into the system. It is not a learning model at this time. | The AI pulls responses from preapproved documents that were fed into the system. It is not a learning model at this time. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Chatbot within Hub | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Other) Currently the following help support activities are handled by a human: - Personalized FAQs - Clarify questions with schema and onboarding, etc. - Handle "Where is 'My file' or 'My Request'?" inquiries - Provide a "Talk to Agent" feature - Schedule a testing window - Provide data summarization and reporting for internal stakeholders - Report operational health of the system - Include training materials, Q/A about the system | Build a chatbot that will address some help support activities through: 1. Access to Internal Knowledge base Including retrieval augmented generation (RAG) 2. Personalized FAQs & contextual generation 3. Interaction with Live Agent 4. Chat & Talk Feature 5. Ability to query custom data source for File or Case status | Help support team in day-to-day communication with external partners | Help support team in day-to-day communication with external partners | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OPOLE | Medicare Part C/D Marketing Material Review | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Expand the volume of materials reviewed while providing consistent insights into trends and issues with materials received. | Reduce cycle time and increase the volume reviewed. | Guided recommendations. | Guided recommendations. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | QPP Admin Bot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI is intended to help with code complexity. | In general, we expect to see fewer lines of code, more efficient code, increased developer output, and reduced story points for work, which in turn produces cost savings. | Recommendation. | Recommendation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OAGM | CMS Labor Analysis Wizard (CLAW) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | CLAW provides semi-automated analysis and insights into contractor-proposed labor for evaluation and negotiation purposes, enabling OAGM to make more informed procurement decisions that optimize contract terms and enhance the efficiency of the federal acquisition process. | CLAW enables OAGM to make more informed procurement decisions that optimize contract terms and enhance the efficiency of the federal acquisition process. | CLAW outputs normalized labor category classifications, historical price trend analyses, and comparative insights on contractor-proposed labor rates to support OAGM's contract evaluation and negotiation processes. | 25/05/2026 | c) Developed with both contracting and in-house resources | Skyward Solutions | Yes | CLAW outputs normalized labor category classifications, historical price trend analyses, and comparative insights on contractor-proposed labor rates to support OAGM's contract evaluation and negotiation processes. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | CCSQ Now Assist for CSM | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | CCSQ Now Assist for CSM (Customer Service Management) is a product inside ServiceNow (SaaS). It integrates Gen AI with CSM, and uses Now LLM (ServiceNow native Large Language Model) to generate content based on machine learning (ML). Now Assist for CSM helps agents to improve productivity and efficiency and deliver better services. Improve agents' responsiveness and productivity * Quickly get familiar with a case/chat by getting case/chat summarization * Quickly resolve the case by using auto-generated resolution notes. | CCSQ Now Assist for CSM (Customer Service Management) is a product inside ServiceNow (SaaS). It integrates Gen AI with CSM, and uses Now LLM (ServiceNow native Large Language Model) to generate content based on machine learning (ML). Now Assist for CSM helps agents to improve productivity and efficiency and deliver better services. Improve agents' responsiveness and productivity * Quickly get familiar with a case/chat by getting case/chat summarization * Quickly resolve the case by using auto-generated resolution notes. | Agents have quick access to - Case summarization - Chat and agent hand-off summarization - Resolution Notes Generation | 24/12/2026 | a) Purchased from a vendor | Yes | Agents have quick access to - Case summarization - Chat and agent hand-off summarization - Resolution Notes Generation | Historical CCSQ ServiceNow case data | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | AI Workspace | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Hosted development environment with access to AI tooling, including LLMs, for use in early-stage exploration. | AI Workspace provides a hosted development environment with access to AI tooling, including LLMs, enabling CMS teams to conduct early-stage AI exploration and experimentation that can lead to innovative solutions. | AI Workspace provides a hosted development environment that enables end users to create code, with the organizational impact being rapid exploration of AI ideas to validate the technical feasibility, viability, and desirability of potential solutions. | 25/04/2026 | c) Developed with both contracting and in-house resources | Skyward Solutions | Yes | AI Workspace provides a hosted development environment that enables end users to create code, with the organizational impact being rapid exploration of AI ideas to validate the technical feasibility, viability, and desirability of potential solutions. | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | Citation Analysis and Survey Assistant (CASA - Nursing Home Survey CMS 2567) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | CASA enhances the efficiency and effectiveness of monitoring and reviewing nursing home surveys across the US. It enhances how the Quality, Safety, and Oversight Group (QSOG) and Survey Operations Group (SOG) assess how State Survey Agencies (SSAs) are citing nursing home deficiencies, reported on CMS Form 2567, by employing advanced natural language processing and machine learning technologies. | Speeds up the manual and time-consuming process of survey review and citing nursing home deficiencies, reported on CMS Form 2567, by employing advanced natural language processing and machine learning technologies. | An application which provides Nursing Home Oversight groups an interface to view, track, and utilize all the features supported by the aforementioned ML/AI-powered processes. | 24/11/2026 | b) Developed in-house | Yes | An application which provides Nursing Home Oversight groups an interface to view, track, and utilize all the features supported by the aforementioned ML/AI-powered processes. | It uses data from the Nursing Home Care Compare website. A model/process that uses an LLM and few-shot learning to identify Extent and Sample from deficiency text. Accuracy metrics are generated by comparing development outcomes with labeled data collected from labeling jobs. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Independent Dispute Resolution (IDR) Eligibility Rules Engine | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The current Independent Dispute Resolution (IDR) Technical Assistance (TA) process is very manual and time intensive which limits throughput. The rules engine should significantly expedite processing and expand capacity. Automating more of the process may also increase consistency across recommendations and result in a more predictable timeframe for the workflow by better positioning disputes for analyst review. | The AI tool will help automate the eligibility review process, reducing time-intensive manual steps and increasing consistency of results. | Use of artificial intelligence (AI) models to identify the presence or absence of necessary data points within documentation. AI tool searches documentation to identify and store necessary data points such as document title, file type, payment date, service code, claim number, and date of service. | 25/03/2026 | a) Purchased from a vendor | Yes | Use of artificial intelligence (AI) models to identify the presence or absence of necessary data points within documentation. AI tool searches documentation to identify and store necessary data points such as document title, file type, payment date, service code, claim number, and date of service. | Federal Independent Dispute Resolution (IDR) Dispute Data | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Review Regulatory Comments | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Public comment analysis in the Letter to Issuers (LTI) and Notice of Benefits and Payment Parameters (NBPP) to streamline review of comments. | Identify trends in responses from commenters on policy and operational issues proposed by CMS. | Reduce the amount of time and resources CCIIO and contracting staff would need to review the LTI and NBPP by an estimated 25%. | Reduce the amount of time and resources CCIIO and contracting staff would need to review the LTI and NBPP by an estimated 25%. ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Improved Data Quality Checks | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Improve consumer experience by providing an asynchronous (not at the same time) solution to detect and provide near real time feedback via outreach to the consumer, shortening the overall return cycle time without requiring UI changes. | Develop a POC classifier model to identify incorrect document upload types / low-quality images through use of optical character recognition (OCR). | Improvement of user experience | Improvement of user experience | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CMCS | Performance Metrics Database and Analytics (PMDA) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | CCSQ Now Assist for Creator | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Improve efficiencies regarding development within the CCSQ ServiceNow program. | Improve time to deliver customer value through improved efficiencies regarding development within the CCSQ ServiceNow program. | Text to Code. Text to Flow. Flow Assist | 24/10/2026 | a) Purchased from a vendor | Yes | Text to Code. Text to Flow. Flow Assist | Peer reviews | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CM | Docketscope Public Comment Processing | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Docketscope is the platform used to triage public comments submitted on the OPPS/ASC and ESRD PPS proposed rules. | Allows division staff to compile, organize, and process thousands of comments that inform final rulemaking. This AI functionality may contribute to greater efficiencies in reviewing public comments by automating the process of identifying identical comments and allowing for quicker processing of comments with similar themes. | The AI functionalities used in Docketscope include a clustering feature that automatically groups similar public comments together using text analysis and heuristic techniques, an issue mapping functionality that depends on machine learning to render HTML document versions, and a bulk processing comment feature which uses a rules-based system and logic programming to identify public comments relevant to a specific topic. | 23/04/2026 | a) Purchased from a vendor | No | The AI functionalities used in Docketscope include a clustering feature that automatically groups similar public comments together using text analysis and heuristic techniques, an issue mapping functionality that depends on machine learning to render HTML document versions, and a bulk processing comment feature which uses a rules-based system and logic programming to identify public comments relevant to a specific topic. | Public comments from regulations.gov | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | iVeri-Fi (Test) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | In October 2024, Serco will begin utilizing iVeri-Fi (a decision service platform) to perform automated processing of remote identity proofing (RIDP) verification tasks. These tools are already in our Eligibility Workers Support System (EWSS) stack and have an existing Authority to Operate (ATO). We are not introducing any new technologies - we are just changing how and where the work is done through automation (previously this work was not integrated with the Task Inconsistency Processing System (TIPS)). | A decision service (Sapiens) will make the adjudication decision using Remote Identity Proofing (RIDP) business rules. This service integrates with Event-Based Processing (EBP) microservices, Sapiens Decision, and Rosette Name Indexer (RNI) for matching identity data. Sapiens Decision uses AI to ensure consumer and RIDP data match and will incorporate more machine learning in the future. | Significant reductions in operational costs, increased efficiency in task processing, improved quality and consistency in decision-making, and enhanced user experience for eligibility support workers by reducing their manual workload. The system also aims to facilitate easier updates and modifications, supporting ongoing improvements and expansions of automation. | Significant reductions in operational costs, increased efficiency in task processing, improved quality and consistency in decision-making, and enhanced user experience for eligibility support workers by reducing their manual workload. The system also aims to facilitate easier updates and modifications, supporting ongoing improvements and expansions of automation. ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OHI | AI-Powered Meeting Notes for MAG Hearings and AIRC Sessions | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to address process redundancy by streamlining repetitive tasks, reducing manual effort, and minimizing duplication across workflows. This allows teams to operate more efficiently and focus on higher-value activities. | The goal is to reduce manual note-taking, improve accuracy, and ensure timely documentation for case management and compliance. | The outputs will include automated workflows, consolidated reports, and task completion logs that eliminate repetitive manual steps and streamline operational processes. | The outputs will include automated workflows, consolidated reports, and task completion logs that eliminate repetitive manual steps and streamline operational processes. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OHI | AI-Generated Pre-Briefs for MAG Hearings and AIRC Sessions | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Utilize an AI system to compile structured pre-briefs for MAG Hearings and AIRC sessions. | The goal is to provide Hearing Officers with a concise, data-rich overview of the appeal case, enabling more focused and efficient sessions. | A standardized, data-rich case summary for each appeal, including key facts, timelines, relevant documentation, and decision history, presented in a concise format optimized for Hearing Officer review. | A standardized, data-rich case summary for each appeal, including key facts, timelines, relevant documentation, and decision history, presented in a concise format optimized for Hearing Officer review. ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OHI | Knowledge Management Solution for Appeal Case Workers | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Utilize an AI solution to create a centralized, AI-enhanced Knowledge Management system to support case workers by providing quick access to relevant SOPs, policy guidance, workflows, forms, and call scripts | The goal is to improve consistency, reduce research time, and enhance the quality of appeal processing. | Standardized appeal processing templates, centralized reference materials, and automated case data summaries that provide consistent information, reduce the need for manual research, and support higher-quality decision-making. | Standardized appeal processing templates, centralized reference materials, and automated case data summaries that provide consistent information, reduce the need for manual research, and support higher-quality decision-making. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | CMS Chat | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Workforce productivity and operational efficiency | The expected benefits include significant productivity gains and enhanced operational efficiency through streamlined work. | CMS Chat generates text-based responses including drafted content for emails and reports, document summaries and analysis, synthesized findings, brainstormed ideas, and answers to queries - all delivered through a conversational interface. | 24/12/2026 | c) Developed with both contracting and in-house resources | Skyward IT Solutions | Yes | CMS Chat generates text-based responses including drafted content for emails and reports, document summaries and analysis, synthesized findings, brainstormed ideas, and answers to queries - all delivered through a conversational interface. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Artificial Intelligence Use Case Pilot in Medical Review on Medicare Fee-For-Service (FFS) Improper Payment Measurement (Comprehensive Error Rate Testing Program) Data | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | OFM is hoping to leverage cutting-edge AI technology to potentially introduce new efficiencies and accuracy in the Comprehensive Error Rate Testing (CERT) medical review process by moving away from manual medical review, which is costly, sometimes inaccurate, and inefficient. | Expected benefits include cost savings and new efficiencies and accuracy in clinical medical review decision-making for reporting Medicare FFS improper payments. | Recommendation | Recommendation ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Content Analysis POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Content analysis POC for the automated analysis of qualitative data including comments, complaints, stakeholder interviews, surveys, etc. | The content analysis POC will enhance productivity and operational efficiency by automating the analysis of qualitative data from comments, complaints, stakeholder interviews, and surveys, enabling CMS staff to process large volumes of feedback more quickly and systematically to improve healthcare programs and services. | The AI system outputs coded qualitative data and thematic analysis results, identifying patterns, themes, and insights from comments, complaints, stakeholder interviews, and surveys to support evidence-based decision-making. | 25/07/2026 | b) Developed in-house | No | The AI system outputs coded qualitative data and thematic analysis results, identifying patterns, themes, and insights from comments, complaints, stakeholder interviews, and surveys to support evidence-based decision-making. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Enterprise Architecture LLM for CMS Regulatory Content | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Enhance the Enterprise Architecture (EA) Knowledgebase by creating a centralized, searchable repository of CMS regulatory knowledge, and make complex regulatory information available within the EA environment. | Provides insight into how a law, regulation, policy or guidance will impact CMS programs, business functions, stakeholders, and systems. | Information for, and relationships between CMS systems, business functions, and regulatory information | Information for, and relationships between CMS systems, business functions, and regulatory information | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CMCS | T-MSIS Prima | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Advances T-MSIS program objectives, facilitates data-driven decisions, and accelerates IT delivery | T-MSIS Business Outcomes Acceleration: Team Sprint Velocity Acceleration, Faster IT feature delivery, High-level Task Automation, Better Customer Outcomes (initially, faster responses to customer assistance) | Code, Documentation, Communications, Analysis/Research | Code, Documentation, Communications, Analysis/Research ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CM | Health Plan Management System - Complaint Tracking Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Support the analysis of MA and Part D beneficiary complaints data. | Enhance CMS' understanding of beneficiary issues with MA and Part D plans. | Complex analysis of a large set of complaint data to identify trends in order to facilitate casework activity. | Complex analysis of a large set of complaint data to identify trends in order to facilitate casework activity. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Knowledge Management Solution | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI-driven system addresses a critical challenge in government operations - preserving institutional knowledge. | Key benefits include knowledge preservation by capturing and maintaining complex expertise, and resource development that helps onboard new staff and upskill existing team members. Additionally, operational efficiency is enhanced by enabling teams to accomplish more with existing resources. Continuity is ensured by reducing knowledge loss when experienced staff transition. | Contextual answers and recommendations and reference documentation | Contextual answers and recommendations and reference documentation | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | ASSIST Tool | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This tool focuses on strategic alignment and mission effectiveness across the team. | This tool focuses on strategic alignment and mission effectiveness by ensuring that work activities directly support CMS's strategic objectives through Strategic Framework Integration. It maintains organizational direction and priorities with a Mission Focus. Furthermore, it drives performance improvements and best practices through Operational Excellence and demonstrates how daily work contributes to broader CMS goals with accountability. | The output provides analysis and recommendations based off of stated operational achievements. | The output provides analysis and recommendations based off of stated operational achievements. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Executive Order Gap Analysis Tool | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | A specialized compliance and analysis solution that analyzes Executive Orders and program impact. | The solution assists the program in compliance and analysis. It systematically reviews new Executive Orders and compares them against existing business and technical requirements. It identifies gaps where current processes might need adjustments and supports compliance to ensure CMS operations align with federal mandates. | Provides recommendations and gap analysis. | Provides recommendations and gap analysis. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Contract Invoice Analyzer Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Uses AI and automation to provide a more comprehensive review of contract invoices, mitigating waste, fraud, and abuse. | Contract Invoice Analyzer is an AI-powered financial oversight tool designed to provide a comprehensive analysis of contract invoices. It automates part of the review process, identifying potential patterns of waste, fraud, and abuse. Additionally, it helps optimize contract spending and oversight. | Provides recommendations and analysis | Provides recommendations and analysis | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Code Assistance WETG GitHub CoPilot Proof-of-Concept | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Coding assistance | Cost savings, improved efficiency and quality | Suggestions | 25/08/2026 | a) Purchased from a vendor | No | Suggestions | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Search in Google Vertex (discovery) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Improved search | Improved customer experience | Suggestions | Suggestions | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI in Slack | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Improved Slack | Improved operations | Suggestion | 25/07/2026 | a) Purchased from a vendor | Yes | Suggestion | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI in Figma Make | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Rapid prototyping | Improved speed and efficiency | Suggestion | 25/07/2026 | a) Purchased from a vendor | No | Suggestion | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Language Translation Support in Smartling | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Improved translation | Cost savings, improved efficiency and quality | Suggestion | 24/09/2026 | a) Purchased from a vendor | No | Suggestion | No | Unpublished | k) None of the above | No | Unpublished | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Web Content Search Engine Optimization (SEO) in Drupal | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Search engine optimization | Cost savings, improved efficiency and quality | Suggestion | Suggestion | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | Medicare AI Customer Insights | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Improved customer insights | Cost savings, improved efficiency and quality | Suggestion | 25/08/2026 | c) Developed with both contracting and in-house resources | commonFont and AWS | Yes | Suggestion | N/a | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | Medicare AI Drug Search | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Government Benefits Processing | Pilot | c) Not high-impact | Not high-impact | Generative AI | Improved customer experience | Improved access to government benefits | Suggestion | 25/08/2026 | c) Developed with both contracting and in-house resources | Oddball and AWS | Yes | Suggestion | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | Marketplace Qualified Health Plan Benefit AI Assistant (discovery) | a) Pre-deployment The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Improved customer experience | Improved benefits selection | Suggestion | Suggestion | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Automation within the WETG Web Help Service Desk (discovery) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Improved help desk service | Cost savings, improved efficiency and quality | Suggestion | Suggestion | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/EPRO | EPRO HTA Reporting | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Text Network Analysis POC | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | RFI Comment Analysis POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Addresses the challenge of efficiently analyzing and extracting insights from large volumes of comments in response to an RFI. | The AI solves the challenge of efficiently conducting qual data analysis on RFI comments. | The system produces coded data and conducts thematic analysis. | 25/07/2026 | b) Developed in-house | Yes | The system produces coded data and conducts thematic analysis. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | AI Agent Orchestrator POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI Agent Orchestrator solves the problem that current chatbots are limited to question-and-answer interactions and cannot orchestrate complex, multi-step analytical workflows that require calling external data science notebooks and tools in the correct sequence to solve sophisticated problems. | The AI Agent Orchestrator will transform CMS's analytical capabilities by enabling any staff member to execute sophisticated, multi-step data science workflows through natural language interactions, eliminating the current barrier of requiring specialized technical expertise to access complex analytical tools and notebooks. | The output is a natural language interface that enables users to easily interact with complex data science tooling and analytical workflows without requiring specialized knowledge of the underlying systems | 25/07/2026 | c) Developed with both contracting and in-house resources | Noblis | Yes | The output is a natural language interface that enables users to easily interact with complex data science tooling and analytical workflows without requiring specialized knowledge of the underlying systems | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | MCP Server Registry and Integration Platform | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The MCP Server Registry solves the problem that CMS teams currently lack a centralized, standardized platform to develop, share, and integrate specialized AI tools and capabilities across the organization | The MCP registry and integration platform will support agentic AI tool development across CMS by enabling any team to build and share specialized capabilities through standardized MCP servers | The AI system outputs a centralized registry platform that enables CMS teams to discover, register, and integrate MCP servers through standardized protocols. | The AI system outputs a centralized registry platform that enables CMS teams to discover, register, and integrate MCP servers through standardized protocols. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Agentic Web Search | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The AI-powered web search agent solves the problem that CMS's custom chatbot, CMS Chat, currently lacks internet access and is limited to training data with a cutoff date, preventing staff from accessing real-time information | The web search agent will significantly enhance the chatbot's capabilities by providing access to real-time information | The agent will pull real-time web search results into CMS Chat context | The agent will pull real-time web search results into CMS Chat context ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Deep Research Multi-Agent System | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The Deep Research multi-agent system solves the problem that CMS employees currently cannot conduct research directly within CMS Chat. | The Deep Research system will enable CMS employees to conduct multi-faceted research directly within CMS Chat by automatically decomposing complex queries into targeted subqueries, searching across web and internal data sources simultaneously | The AI system outputs research reports delivered directly within CMS Chat's conversational interface | The AI system outputs research reports delivered directly within CMS Chat's conversational interface ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | 508 Compliance Review MCP Server | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The 508 Compliance Review MCP server solves the problem that CMS staff currently must manually review documents for Section 508 accessibility compliance, which requires specialized knowledge of accessibility standards, is time-consuming, and may result in inconsistent evaluations | The 508 Compliance Review system will enable CMS staff to automatically evaluate documents for accessibility compliance through CMS Chat and other systems | The AI system outputs detailed 508 compliance assessments delivered through CMS Chat and possibly other systems. | The AI system outputs detailed 508 compliance assessments delivered through CMS Chat and possibly other systems. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | PubMed Literature Review MCP Server | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Agentic AI | The PubMed Literature Review MCP server solves the problem that CMS teams need to carry out various types of literature reviews, commonly using PubMed data, but lack an internal tool for doing this at greater scale than manual effort allows | Enable CMS teams to conduct literature reviews at scale through an internal tool that automates PubMed data analysis and synthesis | Literature reviews and research syntheses that combine PubMed data with other contextual information | 25/08/2026 | c) Developed with both contracting and in-house resources | Skyward IT Solutions | Yes | Literature reviews and research syntheses that combine PubMed data with other contextual information | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | SAM.gov RFI Comment Analysis POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The SAM.gov RFI Comment Analysis POC solves the problem that CMS procurement staff currently must manually review and score vendor responses to Requests for Information (RFI) posted on SAM.gov, which is time-intensive, especially when there are many questions and many vendors responding. | The RFI Comment Analysis POC will enable CMS teams to systematically evaluate vendor responses using AI-powered analysis, improving the efficiency of assessments. | The AI system outputs vendor scoring and ranking reports | 25/08/2026 | b) Developed in-house | Yes | The AI system outputs vendor scoring and ranking reports | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Plan Justification Tool | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Identify trends/patterns within plan certification justification templates, which contain free-form text data, through use of natural language processing (NLP) and classification techniques. | Build a supervised ML model using historical justification data and associated plan certification outcomes that can be recommended to CMS to build towards a more efficient review process. | Improve efficiency of the plan certification review process by automating the initial justification review, rendering a verdict, and then looping in a human for the final decision. | Improve efficiency of the plan certification review process by automating the initial justification review, rendering a verdict, and then looping in a human for the final decision. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | TIC URL Automation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Issuers applying for QHP (Qualified Health Plan) certification, including issuers offering off-Exchange SADPs (stand-alone dental plans), must submit a Transparency in Coverage URL in MPMS that leads to a page on the issuer's website where required information is posted. The current review process requires reviewers to manually read and verify the language presented on each TIC URL for compliance, which is both time-consuming and labor-intensive. | AI algorithms can expedite the process by rapidly scanning URL content to identify whether the required language is present and compliant, reducing the review time compared to manual reading. Additionally, AI can manage and review large volumes of URL content, further accelerating the review process. | Reduce manual review, apply uniform review criteria across all URLs, ensuring that the review is consistent and free from the variability that can occur across multiple reviewers | Reduce manual review, apply uniform review criteria across all URLs, ensuring that the review is consistent and free from the variability that can occur across multiple reviewers | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Rx Data Integrity Review | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The current process for conducting the Rx Data Integrity review is time-intensive and requires reviewers to manually review online formularies in a condensed timeline. The purpose of this proposal is to develop a new process, which will automate as much of the review as possible, using existing language models as well as large language models (LLMs) to introduce efficiencies and increase data accuracy. | A Python-based automation pipeline will download PDFs and extract required Rx information in an appropriate format for review | Automate as much of the review as possible and increase data accuracy | Automate as much of the review as possible and increase data accuracy | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | RA and RADV Predictive Modeling and Methodology Evaluation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Risk adjustment data validation (RADV) is the audit of HHS-operated Risk Adjustment (RA). KPMG, the main RADV contractor, seeks to conduct predictive modeling and simulations of proposed policy changes. KPMG also validates the RA models (ML-based) and evaluates the RADV methodology. | Predictive modeling and simulations of policy changes are used to determine likelihood of various outcomes across markets, by individual issuer, etc. Model validation and methodology evaluation determine effectiveness, fairness, and impacts of the programs. | Better decision-making ability for proposed policy changes. Ensure integrity and validity of RA models, which are complex ML-based models. | Better decision-making ability for proposed policy changes. Ensure integrity and validity of RA models, which are complex ML-based models. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | RADV Medical Record Review | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) RADV entails the review of enrollees' medical records to determine if diagnoses submitted to CCIIO for purposes of calculating RA transfers actually exist. This medical record review is time- and resource-intensive for KPMG (and therefore CCIIO). | Improve efficiency of medical record review by flagging diagnoses for KPMG coders to review. | Reduce time (and therefore costs) spent on RADV medical record review | Reduce time (and therefore costs) spent on RADV medical record review | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Issuer Expansion/Market Entry Prediction | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | CCIIO FO conducts targeted outreach to issuers likely to enter the markets or expand to other states | Predictive models using AI/ML and over 300 signals derived from data determine the likelihood of expansion or new entry into the markets in the future by issuer or parent-company | Better decision-making ability and strategy for CCIIO to conduct outreach to market entrants/expanders | Better decision-making ability and strategy for CCIIO to conduct outreach to market entrants/expanders | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | SBC Content Review | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) The SBC Content Review identifies cost-sharing discrepancies between the Plans and Benefits Template and SBC Form. If there are discrepancies in the numerical values input in the two data sources, reviewers must manually review the "Limitations, Exceptions and Other Information" in the SBC Form to assess if a true discrepancy exists. The purpose of the proposal is to integrate large language models (LLM) to programmatically review qualitative data in the SBC Form. | Improve efficiency of SBC Form review. | Reduce time in identifying true discrepancies. | Reduce time in identifying true discrepancies. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | MFT UI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Hub/MFT team manually supports operations related to onboarding, Q&A, file tracking, and technical inquiries (monthly average: ~800 email inquiries and ~40 Jira/SNOW tickets). Communication is manual via email, Zoom meetings, and Slack; this current process is time-consuming and degrades the stakeholder experience. Typical operational hours are Monday through Friday, 8 a.m. to 6 p.m. ET (which does not serve Mountain Time, the West Coast, Hawaii, or Alaska), followed by XOC escalations, meaning non-production inquiries are responded to the following business day. We have limited Quality of Service (QoS) metrics to accurately assess consumer satisfaction at present. | Implement a chatbot to provide real-time file status updates for external users and tools for internal teams to generate reports and visualize key metrics. | Real-time file status with detailed processing information; FAQs; ability to chat with a live agent; survey form; historical file tracking; limited CMS stakeholder testing; development of Quality of Service (QoS) metrics; self-learning performance and alerting; rollout to alpha partners and issuers; user/alpha partner feedback | 25/08/2026 | b) Developed in-house | Yes | Real-time file status with detailed processing information; FAQs; ability to chat with a live agent; survey form; historical file tracking; limited CMS stakeholder testing; development of Quality of Service (QoS) metrics; self-learning performance and alerting; rollout to alpha partners and issuers; user/alpha partner feedback | Data are statuses of file transfers held in the EFT PostgreSQL database tables | No | PIA not publicly available | k) None of the above | Yes | PIA not publicly available | |||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | MLMS Upgrade MILA from CARLA to Druid | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | New chatbot proposed by CMS MLMS team will be integrated into the existing MLMS application. It aims to improve user experience, designed to maintain or boost deflection rates, and will be hosted within the CMS environment for smooth integration. | - Empower MLMS operations team with full control over chatbot management - Maintain or improve current deflection rate from MILA 1.0 - Reduce help desk costs through a more efficient support solution - Tailor chatbot content to specific MLMS support requirements - Deliver personalized user interactions to enhance experience - Ensure smooth escalation to Tier 1 support when needed | Improve user experience and maintain or boost deflection rates | Improve user experience and maintain or boost deflection rates | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Interoperability URL Review Automation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enable the team to save time and man-hours and achieve cost savings | Improve timeliness, minimize review inconsistencies, improve quality | Verifying the submitted Interoperability URLs are active; checking that hyperlinks within the Interoperability webpages are active; reviewing URL content for required standards and language; entering review results into the Interoperability Review Round Workbook; reviewing URLs submitted for Question 3 that provide conformant technical documentation for the Patient Access API; entering Interoperability URLs into the Interoperability Review Round Workbook; conducting technical review of a selected subset of the applications; transferring review results from the Interoperability Review Round Workbook to MPMS; all Interoperability Justification Forms will be reviewed manually | Verifying the submitted Interoperability URLs are active; checking that hyperlinks within the Interoperability webpages are active; reviewing URL content for required standards and language; entering review results into the Interoperability Review Round Workbook; reviewing URLs submitted for Question 3 that provide conformant technical documentation for the Patient Access API; entering Interoperability URLs into the Interoperability Review Round Workbook; conducting technical review of a selected subset of the applications; transferring review results from the Interoperability Review Round Workbook to MPMS; all Interoperability Justification Forms will be reviewed manually | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | C3PO: CMS Comprehensive Cybersecurity and Privacy Optimization | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Reduce the burden of reviewing lengthy OMB, NIST, and HHS policy documents and enable quicker review of SORNs and PIAs | Reducing the response time to new or emerging policies and technologies, and the ability to adequately review privacy agreements in the event of an incident | Recommendations on policy updates, identification of privacy agreements and the associated systems. | 24/10/2026 | c) Developed with both contracting and in-house resources | Connsci and OpenAI | No | Recommendations on policy updates, identification of privacy agreements and the associated systems. | Publicly available OMB memos, NIST Guidance, CMS policies, CMS SORNs and PIAs | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Integrated Data Repository (IDR) Customer Analytic Environment (CAE) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Provides an AI/ML environment connected to the IDR that enables CAE customers to focus on their AI use cases and business outcomes, without the burden of setting up or maintaining their own IT infrastructure or AI/ML labs or environments. | The CAE provides CMS Centers and Offices with secure, fully managed AI/ML workspaces directly integrated with the IDR, eliminating the need to build or maintain separate infrastructure. This accelerates AI innovation, shortens time-to-insight, and enables scalable adoption of advanced analytics across the agency. | Within the CAE, AI systems produce outputs such as predictions (e.g., forecasting trends, detecting anomalies), recommendations (e.g., suggested actions or risk mitigation strategies), classifications (e.g., grouping records or identifying patterns), and other decision-support insights. These outputs are generated from IDR data within a secure, fully managed environment and are designed to inform and augment human decision-making across CMS programs, not to operate autonomously. | 25/07/2026 | c) Developed with both contracting and in-house resources | GDIT | Yes | Within the CAE, AI systems produce outputs such as predictions (e.g., forecasting trends, detecting anomalies), recommendations (e.g., suggested actions or risk mitigation strategies), classifications (e.g., grouping records or identifying patterns), and other decision-support insights. These outputs are generated from IDR data within a secure, fully managed environment and are designed to inform and augment human decision-making across CMS programs, not to operate autonomously. | CAE is an environment where customers will build their ML models and AI use cases by leveraging the IDR | Yes | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | IDR Support Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The IDR Support Bot is an AI-powered chatbot that makes it easier for CMS users to get the IDR information they need, when they need it. It answers questions about IDR terminology, tools and services, onboarding steps, available data, training content, key contacts, and more, without the hassle of submitting a support request or digging through pages of documentation. | Any CMS user with an active ID can use the IDR Support Bot to gain a foundational understanding of the IDR, its data offerings, and how to navigate its resources. The IDR Support Bot leverages retrieval-augmented generation (RAG) technology to provide accurate and contextually relevant responses to user queries. By combining a structured knowledge base with real-time document retrieval, IDR Support Bot ensures that users receive up-to-date and comprehensive information about the IDR. | The IDR Support Bot leverages retrieval-augmented generation (RAG) technology to provide accurate and contextually relevant responses to user queries. | 25/07/2026 | c) Developed with both contracting and in-house resources | GDIT | Yes | The IDR Support Bot leverages retrieval-augmented generation (RAG) technology to provide accurate and contextually relevant responses to user queries. | General information about the IDR, such as terminology, tools and services, onboarding steps, available data, training content, key contacts, and more. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Eligibility & Enrollment Medicare Online (ELMO) Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Assist Medicare caseworkers in navigating a fairly complex application | More efficient caseworkers | Answers questions about the ELMO tool, how to navigate it, and where to find information | 25/07/2026 | c) Developed with both contracting and in-house resources | Peraton, Inc. | Yes | Answers questions about the ELMO tool, how to navigate it, and where to find information | The model was trained using CMS general information and ELMO tool documentation | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Production Operations Anomaly Analysis | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify potential production issues proactively and refer for human analysis | Early identification of issues and improved response time, ensure accuracy of bills | Flags anomalies that stray far from its predictions | 25/06/2026 | a) Purchased from a vendor | Yes | Flags anomalies that stray far from its predictions | Production Operational data, high level billing data, state and agency summarized billing data, beneficiary level billing data in the future (no PII/PHI) | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Case Management Tool Case Creation & Automation | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Automate and optimize manual case creation for incoming documents/requests and simplify manual tasks | Improved case completion volume, higher caseworker productivity | Creates a case in the system based on a scanned document from the public, automates simple functions, etc. | 25/07/2026 | a) Purchased from a vendor | Yes | Creates a case in the system based on a scanned document from the public, automates simple functions, etc. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/DAB/OS | AI to Improve Public Access to the Administrative Appeals Process | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | More effectively leverage online resources to better assist the public and reduce the number of misfiled appeals submitted to the Departmental Appeals Board (DAB) via electronic filing. | DAB adjudicatory divisions rely heavily on electronically filed (E-filed) appeals. Most appellants access E-filing through the DAB's website. A chatbot on the website will reduce filing errors and improve customer experience by directing appellants to the correct DAB adjudicatory division responsible for deciding their appeal. | Reliable data on appellant filing activity and metadata the agency will use to analyze workloads and allocate resources. | Reliable data on appellant filing activity and metadata the agency will use to analyze workloads and allocate resources. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/DAB/OS | AI Use Policy Tool | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Ensure public confidence in the integrity of DAB decisions by maximizing quality review standards and reducing errors. | The DAB is responsible for issuing fair, impartial, legally correct and defensible determinations which serve as the final decision of the DHHS Secretary. The AI quality review tool will use a large language model (LLM) to create algorithms that run behind our case tracking system to randomly select certain DAB decisions and identify potential quality review issues. The LLM will scan DAB decisions to ensure compliance with quality review standards (e.g., protecting PHI, PII, and FTI). Benefits include more effective quality review and faster identification of data trends that may require additional analysis. | Analysis of large volumes of data that can be used to address common errors and support the development of targeted training and job aids. | Analysis of large volumes of data that can be used to address common errors and support the development of targeted training and job aids. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/DAB/OS | Resources to Assist the Advisory Board In Identifying AI Tools for Use In An Adjudication Environment | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Agency adjudication activities require the analysis of large quantities of data to conduct docket analysis, identify efficiencies in case processing, and conduct 508 compliance required to make decisions available to the public. | Use AI to analyze data received from appellants and interested parties to identify trends, increase efficiency in case processing and improve adjudication outcomes. | Enhanced workload data that can be used to allocate resources to the DAB's various adjudicatory divisions. | Enhanced workload data that can be used to allocate resources to the DAB's various adjudicatory divisions. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: 356H Machine Learning (ML) Facility Supply Chain Role Classification Previously: 356H ML Facility Supply Chain Role Classification | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual classification of facility supply chain roles from FDA industry submissions (356h forms) is time-consuming and inefficient | Improve the efficiency of evaluating 356H forms to assess a facility's supply chain role, thereby decreasing the time required for the processing of submissions and enabling faster oversight of drug manufacturing facilities. | Extracts a facility's supply chain role from industry submissions and displays identified roles on a data-stewards screen for human verification and final determination. | 23/05/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Extracts a facility's supply chain role from industry submissions and displays identified roles on a data-stewards screen for human verification and final determination. | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: Risk-based FAR Review & Decision Support Previously: Field Alert Reports (FAR) Prioritization Model | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual and subjective assessment process for Field Alert Reports (FARs) can lead to inconsistency in prioritizing and assessing reports, potentially leading to ineffective resource allocation where the same level of formality is being applied for all issues, regardless of risk. | Assist Field Alert Report (FAR) reviewers by providing objective intelligence and insights that help prioritize the highest risk reports, potentially reducing response time to high-risk issues while maintaining human oversight of all decisions and leading to Agency resources being used more efficiently. | AI-based machine learning classification of FAR risk into low, medium, and high; provides insights on problem clusters, rare-events, and source variables. | 24/10/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) and contractor teams | Yes | AI-based machine learning classification of FAR risk into low, medium, and high; provides insights on problem clusters, rare-events, and source variables. | Internal Field Alert Reports data from LSMV | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Analytics-Driven Supplement Evaluation (ASE) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Exponential increase in post-approval chemistry, manufacturing, and controls (CMC) change submissions, with 80% being Changes Being Effected (CBE-30/0) notifications that may be suitable for systematic analytics-driven evaluation. | This AI use case supports the triage and staff assignment process for the review of post-market Change Being Effected (CBE) supplement submissions, improving review efficiency and consistency while ensuring appropriate regulatory oversight. | A convolutional neural network (CNN) model, in combination with a rules-based approach, produces an output that helps staff triage CBE submission review | 24/03/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | A convolutional neural network (CNN) model, in combination with a rules-based approach, produces an output that helps staff triage CBE submission review | Data submitted in applicants' supplemental submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: FAR-based Facility Signal Detection Tool Previously: Post-market Surveillance Reports Signal Detection and Cluster Analysis. | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Need for proactive detection of quality signals in post-market surveillance reports using statistical process control and topic modeling to identify potential drug quality hazards and mitigate the associated risks. | Identifies proactive quality signals and their associated problem clusters in an objective manner for triage and human review, helping prioritize resources on the higher risk issues and more complex problems. | Identifies problem clusters for the flagged signals in an objective manner for triage/review. | 24/06/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Identifies problem clusters for the flagged signals in an objective manner for triage/review. | Internal Field Alert Reports data from LSMV (FAERS tool) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | MedWatch Dashboard | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Need to proactively identify emerging issues and clusters within MedWatch reports using advanced analytics. | Assist with consistent monitoring and identification of product risks from MedWatch reporting patterns and report content to support review staff in detecting potential safety problems that could affect patients. | Identify product risk signals from MedWatch reports by using time series analysis to flag products and topic modeling to summarize the comments. | 23/04/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Identify product risk signals from MedWatch reports by using time series analysis to flag products and topic modeling to summarize the comments. | MedWatch data from LSMV and IQVIA data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Quality Surveillance Dashboard (QSD) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Need for consistent, data-driven assessment of drug manufacturing facilities and proactive detection of potential quality signals that could indicate problems with drug safety or effectiveness. | Extracts unstructured text from documents and pools it with other available data to form a dashboard that enables consistent assessment of Center for Drug Evaluation and Research (CDER) regulated manufacturing facilities, supporting FDA's oversight of drug quality. | Identifies and extracts keywords/phrases from unstructured documents and presents sentences containing keywords/phrases in context. | 23/03/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Identifies and extracts keywords/phrases from unstructured documents and presents sentences containing keywords/phrases in context. | FDA EIR documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Annual Report CMC | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA receives numerous annual reports from drug manufacturers containing important manufacturing and quality information, but key details can be difficult to locate quickly within lengthy documents. | Objective of this use case was to assist in extracting Chemistry, Manufacturing, and Controls (CMC) changes reported within unstructured annual report industry submission documents to help build a complete repository for downstream analysis and more efficient regulatory review. | Support information extraction from unstructured documents | 24/05/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: Application References Previously: Application-DMF Reference | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FDA receives extensive drug application submissions that contain valuable references to related applications, but these relationships aren't always captured in existing regulatory databases. | Extracts references to Drug Master Files (DMFs) from marketing application submissions including Abbreviated New Drug Applications (ANDA), New Drug Applications (NDA), and Biologics License Applications (BLA). These submission documents may be structured (356H form) or unstructured (electronic Common Technical Document modules 1-4). This pipeline parses content from these documents, extracts DMF references (e.g., ANDA123456 references DMF123456), and exposes the data in a structured format for analysis. | Support information extraction from unstructured documents | 23/12/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: Extracting DMF Facilities from unstructured documents Previously: DMF (Drug Master File) Facilities | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FDA receives Drug Master File (DMF) submissions containing valuable information about manufacturing facilities, but this data exists in various document formats that require manual review to compile comprehensive facility information. | Extracts facility references from Type II (Drug Substance) DMF manufacturing submissions (e.g., DMF123456 discloses that it uses Facility X for manufacturing and Facility Y for stability testing). These DMF submissions may include structured documents (3938 form) or unstructured documents (electronic Common Technical Document module 3), enabling more comprehensive oversight of the drug supply chain. | Support information extraction from unstructured documents | 23/06/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: Information Visualization Platform (InfoViP) to Support Analysis of FAERS safety reports Previously: Information Visualization Platform (InfoViP) to Support Analysis of adverse event reports | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to support the analysis of FAERS Individual Case Safety Reports in post-market safety surveillance by automating duplicate detection, creating temporal visualizations, and classifying reports by information quality. | Information Visualization Platform (InfoViP) supports post-market surveillance using AI to assist with review and analysis of adverse event reports, with advanced visualizations including temporal data and algorithms for detection of duplicate FAERS adverse event reports and classification of reports by level of information quality. | Performs Natural Language Processing (NLP) and applies Machine Learning (ML) algorithms to extract data from unstructured case narratives and combines it with structured data to support analysis and review of adverse event reports. | 25/08/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Performs Natural Language Processing (NLP) and applies Machine Learning (ML) algorithms to extract data from unstructured case narratives and combines it with structured data to support analysis and review of adverse event reports. | Adverse Event data in FAERS | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | LLM-Assisted VAERS Analyses | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Extraction of relevant information, tabulation of data, finding patterns across adverse event reports, and generating hypotheses for further investigations. | Build capacity for and assess the application of an LLM to VAERS (Vaccine Adverse Events Reporting System) to provide reviewers ad hoc VAERS queries and efficiently generate customized query outputs. | To provide reviewers ad hoc VAERS queries. | | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: Module 3 Facilities Extraction Previously: Module 3 Faculties | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA receives drug application submissions (ANDA, NDA, BLA) that contain important manufacturing facility information in Module 3 documents, but this data requires manual review to identify and organize all facility details. | Objective of this use case was to assist in identifying and extracting all drug manufacturing facilities reported within unstructured module 3 submissions from marketing applications to build a comprehensive inventory of drug facilities for better regulatory oversight. | Support information extraction from unstructured documents | 24/02/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Packaging Materials and Suppliers | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA needs to efficiently identify which Drug Master Files (DMFs) contain specific packaging materials and understand how these materials connect to drug applications, but this information is currently difficult to locate across numerous documents. | Objective of this use case is to extract data from unstructured sources to assist in building an inventory of drug packaging materials and their suppliers to support staff in conducting drug supply chain analysis for better regulatory oversight. | Support information extraction from unstructured documents | 24/09/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | Process Large Amount of Submitted Docket Comments | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Deduplication of public comments, and generating draft sentiment analysis and grouping of comments | To enhance automated processing of dockets, we have created an AI/ML tool in CBER/HIVE that automatically downloads dockets and processes them to accelerate the review of docket comments, significantly improving the efficiency and accuracy of our regulatory processes. | To provide reviewers ad hoc VAERS queries. | 23/06/2026 | b) Developed in-house | Yes | To provide reviewers ad hoc VAERS queries. | Public comments on various FDA dockets | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Real World Data/Evidence | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA receives numerous submissions containing real-world data and evidence, but identifying and cataloging these studies across various submission types is time-intensive and critical for regulatory reporting requirements under PDUFA (Prescription Drug User Fee Act). | Assist in identifying industry unstructured submissions containing Real World Data/Evidence (RWD/E) by analyzing parsed content for likely indicators to support congressional reporting and regulatory decision-making. | Supports extracting text from unstructured documents and tagging for documents containing Real World Evidence and Real World Data | 24/10/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Supports extracting text from unstructured documents and tagging for documents containing Real World Evidence and Real World Data | applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Regulatory Starting Material | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA needs comprehensive visibility into the upstream supply chain for drug manufacturing, particularly tracking regulatory starting materials and their suppliers across approved and pending drug applications to better understand potential supply chain vulnerabilities. | Assists in extracting Regulatory Starting Materials (RSMs) and their suppliers from unstructured module 3 industry submissions to help create an inventory that will illuminate the upstream supply chain and help FDA identify potential supply chain vulnerabilities. | Support information extraction from unstructured documents | 23/08/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Resource Capacity Planning | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | FDA needs to accurately predict the volume and complexity of incoming drug application submissions to ensure appropriate staffing and resources are available for timely reviews under the user fee program | Forecasting human drug review program submissions and corresponding FDA workload to support better resource planning and ensure timely review of drug applications that benefit public health. | Forecasts workload submissions across major user fee programs to help support fee setting for the human drug review programs | 20/08/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Forecasts workload submissions across major user fee programs to help support fee setting for the human drug review programs | FDA systems including DARRTS and Panorama | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDRH | Supply Chain Resilience Program, Office of Supply Chain Resilience (OSCR) - Foresight | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Estimating potential future demand of medical devices | Forecasting demand of medical devices and supplies. Forecast demand for critical devices during a variety of scenarios (e.g. natural disaster, PHE) | Aids in forecasting demand for critical devices under a variety of scenarios. | 23/04/2026 | b) Developed in-house | Yes | Aids in forecasting demand for critical devices under a variety of scenarios. | Premier transaction data of healthcare facility purchases | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | HIVE AI Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to assist in solving several problems related to the review process for INDs. Specifically, it aims to address inefficiency, delays in identifying deficiencies, and information overload. By providing recommendations for review disciplines, it helps reviewers quickly identify the review disciplines that are required. Identifying grossly deficient submissions early on reduces the workload, and highlighting key data helps reviewers focus on higher-level tasks that require their expertise. | Overall, the system is designed to improve the efficiency and effectiveness of the regulatory review process, allowing for quicker and better-informed decision making. | The system's outputs include: 1. Review discipline recommendations - automated suggestions for the most appropriate review disciplines for each incoming submission. 2. Highlighted key data - reports highlighting critical information to facilitate quicker understanding by RPMs and reviewers. 3. Summaries - reports summarizing large documents to potentially accelerate review. | 25/07/2026 | c) Developed with both contracting and in-house resources | SAIC | Yes | The system's outputs include: 1. Review discipline recommendations - automated suggestions for the most appropriate review disciplines for each incoming submission. 2. Highlighted key data - reports highlighting critical information to facilitate quicker understanding by RPMs and reviewers. 3. Summaries - reports summarizing large documents to potentially accelerate review. | Uses a dataset of previously submitted IND applications, which provides a comprehensive understanding of the types of data and information included in these submissions, along with feedback and annotations from experienced RPMs and reviewers on a subset of the historical submissions, which helps fine-tune the model's understanding of what constitutes a high-quality submission. | Data cannot be publicly disclosed as it is proprietary information from submissions | Yes | Not publicly available | k) None of the above | Yes | The code is not open source; it resides within the FDA GitLab repository and is not publicly available | Not publicly available | |||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | AI and Vaccine Labeling | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The app is designed to streamline vaccine label review processes, offering several key features to simplify and improve efficiency. It includes MedDRA integration for term searches, comprehensive search across all vaccine documents, and vaccine-specific information retrieval. Additionally, the app provides lookup functionality for approval timelines and active ingredients, as well as tools for detecting duplicates and comparing section content using AI-enhanced technology. | Enhance the vaccine label review process, making it more efficient and effective. | The AI system's output is a list of similarities and differences between vaccine label sections, as well as highlighted changes or updates to vaccine labels. Additionally, it may identify duplicate or similar vaccines and provide recommendations for label revisions or updates. The system generates summarized information about vaccine ingredients, approval timelines, and other relevant details. These outputs would be presented in a user-friendly format, such as tables, charts, or highlighted text, to facilitate easy review and analysis by the user. | | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CDER Publications | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Inefficient manual curation and categorization of publications by CDER authors. It aims to automate the process of organizing publications by focus areas for data call responses and identifying regulatory publications, replacing time-consuming manual review processes. | CDER Pubs is needed for the Science and Research Investments Tracking Archive (SARITA) for reporting of outcomes and to support the prioritization, management, and review of the quality and impact of CDER's science and research investments, helping ensure public accountability for research activities. | Accuracy of AI/ML data curation of publications feed, ability to categorize publications as regulatory or not regulatory, and ability to classify publications | 23/06/2026 | c) Developed with both contracting and in-house resources | NCTR | Yes | Accuracy of AI/ML data curation of publications feed, ability to categorize publications as regulatory or not regulatory, and ability to classify publications | CDER staff research citations from PubMed | https://saritaingest.fda.gov/CDER_Publications_System.html | No | k) None of the above | Yes | ||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CDER Regulatory Science Research (RSR) Projects AI for Process Control in Advanced Manufacturing | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This project explores the use of AI in advanced pharmaceutical manufacturing as part of an exploratory R&D effort focused on model predictive control strategies. The AI components are used solely in a research context to improve understanding of AI-enabled control systems and inform future regulatory readiness. The project does not involve operational use, decision-making, or direct impact on the public or regulated entities. Therefore, it does not meet the definition of a high-impact AI use case under OMB M-25-21. | Classical/Predictive Machine Learning | Need for better process control in continuous manufacturing and development of soft sensors for real-time release testing strategies. | The outcomes of this work can be used to gain a better understanding of AI in advanced pharmaceutical manufacturing control, identify the associated risks, and help review future submissions involving this technology, ultimately supporting more efficient and reliable drug manufacturing. | The AI model demonstrated remarkable performance in setpoint tracking and disturbance rejection for a digital continuous manufacturing line, underscoring the potential of AI-based control strategies in enhancing product quality and regulatory assessment. | 24/07/2026 | b) Developed in-house | No | The AI model demonstrated remarkable performance in setpoint tracking and disturbance rejection for a digital continuous manufacturing line, underscoring the potential of AI-based control strategies in enhancing product quality and regulatory assessment. | Data was generated using a digital twin of a manufacturing plant developed in-house | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Creating a development network | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of inconsistent data formats and inefficient access to unstructured clinical data across multiple healthcare sites. It aims to standardize EHR and claims data conversion into the Sentinel Common Data Model and develop processes for storing and extracting metadata from free text notes to enable timely execution of future Sentinel surveillance tasks. | Methods project applying Natural Language Processing (NLP) to extract data from clinical notes to use in pharmacoepidemiology studies, improving FDA's ability to monitor drug safety using real-world healthcare data. | Creates a network of organizations that can support development of algorithms and use of AI tools such as Natural Language Processing (NLP). | 22/12/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Creates a network of organizations that can support development of algorithms and use of AI tools such as Natural Language Processing (NLP). | Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Developing an Objective and Quantitative Endpoint for Atopic Dermatitis in Pediatric and Adult Populations | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The purpose of this study is to validate the Emerald technology to see if it accurately detects the motion of an individual scratching. The act of scratching would not be considered to have a significant effect on human health and safety, which means this study is not a high-impact AI use case based on the definition in OMB M-25-21. | Classical/Predictive Machine Learning | Intended to solve the problem of lacking objective, quantitative methods to assess nocturnal scratching in children with atopic dermatitis. It aims to create a digital endpoint that can accurately measure scratching behavior to evaluate the efficacy and performance of FDA-regulated treatments for atopic dermatitis and pruritus, addressing an unmet need in clinical assessment. | To advance novel endpoints in drug development, potentially leading to better ways to measure treatment effectiveness for skin conditions affecting children and adults. | A digital endpoint that can accurately measure scratching behavior to evaluate the efficacy and performance of FDA-regulated treatments for atopic dermatitis and pruritus, addressing an unmet need in clinical assessment. | 25/01/2026 | a) Purchased from a vendor | Emerald Innovations | No | A digital endpoint that can accurately measure scratching behavior to evaluate the efficacy and performance of FDA-regulated treatments for atopic dermatitis and pruritus, addressing an unmet need in clinical assessment. | The validation data is being collected as part of the study. | No | k) None of the above | Yes | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Drug Shortage Predictive Model | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Drug shortages have increased significantly since 2017 and worsened during COVID-19, creating critical gaps in patient access to essential medications. FDA seeks to develop predictive capabilities to anticipate shortages before they occur. | Help with prevention and mitigation of drug shortages by signaling early risks to a supply chain, potentially ensuring patients maintain access to essential medications. | Prediction of supply events for all CDER regulated application products in the next 12 months | 24/05/2026 | b) Developed in-house | Yes | Prediction of supply events for all CDER regulated application products in the next 12 months | CDER data on submissions, drug shortages, compliance, reviews. External data on sales dollars and volume and product information. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Category Subcategory Classification - Safety Reports Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Other | Manual analysis and data entry of safety report submissions is time-intensive and requires staff to review scanned PDFs and determine appropriate categories. | Potentially reduces manual labor in processing safety report submissions, allowing FDA staff to focus on safety analysis and regulatory decision-making rather than data entry tasks. | To predict category/subcategory from IND submissions through the rule setup. | 24/05/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | To predict category/subcategory from IND submissions through the rule setup. | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Data Extraction from IND Safety Reports using OCR/AI Technologies | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual extraction of data from IND safety reports is labor-intensive and time-consuming for regulatory staff. | Expedited processing of Investigational New Drug (IND) Safety Reports received by the Agency, enabling more rapid regulatory action in response to reported adverse events and better protection of clinical trial participants. | Using ThinkTrends, a COTS tool, extracted data is converted into E2B(R2) format for automatic ingestion into FAERS LSMV | 25/03/2026 | a) Purchased from a vendor | ThinkTrends | Yes | Using ThinkTrends, a COTS tool, extracted data is converted into E2B(R2) format for automatic ingestion into FAERS LSMV | MedWatch 3500 forms and intake system processing logic | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CDER Style Guide AI Editing Tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | This AI model is designed to detect style and formatting inconsistencies in FDA draft documents by comparing them against the FDA CDER Style Guide standards. It solves the problem of manual quality control inefficiencies and ensures consistent adherence to established documentation standards across all FDA CDER publications. | It significantly reduces the workload for FDA editors by identifying style and formatting issues; however, human review remains essential, especially for important documents, ensuring both efficiency and quality in FDA communications. | AI-identified style and formatting issues | 25/03/2026 | b) Developed in-house | No | AI-identified style and formatting issues | CDER Style Guide | No | k) None of the above | No | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Use case package 1: Empirical application of the Sentinel EHR and claims Data Partner network to address ARIA insufficient inferential requests (UC1) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of determining whether available data sources and analytical methods are suitable for specific pharmacoepidemiologic research questions. It aims to systematically evaluate data fitness-for-purpose and identify viable use cases where protocol-based studies can reliably assess drug safety and effectiveness in real-world populations. | Improved capture of unstructured Electronic Health Record (EHR) data for drug safety studies, enabling FDA to better assess medication safety and effectiveness using real-world healthcare information. | Natural Language Processing (NLP) for Electronic Health Record (EHR) unstructured data extraction | 23/09/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) and contractor teams | Yes | Natural Language Processing (NLP) for Electronic Health Record (EHR) unstructured data extraction | Claims-Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CI5: Development and refinement of toolkits for routine use in the EHR and claims Data Partner network | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This project is designed to solve the problem of inconsistent or inefficient data analysis capabilities across the EHR and claims Data Partner network. It aims to create standardized, reliable analytical tools that can be routinely deployed across different data partners to improve the consistency, quality, and efficiency of pharmacoepidemiologic analyses within the Sentinel System. | Improved confounding control when using Electronic Health Record (EHR) data for drug safety studies, leading to more reliable conclusions about medication risks and benefits in real-world populations. | Regularized machine learning tools (e.g., Least Absolute Shrinkage and Selection Operator (LASSO)-based models) combined with targeted learning methods for improved large-scale covariate adjustment | 23/09/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Regularized machine learning tools (e.g., Least Absolute Shrinkage and Selection Operator (LASSO)-based models) combined with targeted learning methods for improved large-scale covariate adjustment | Electronic Health Record (EHR) Data Elements | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Use case package 2 (UC2): Empirical application of the Sentinel EHR and claims Data Partner network to enhance ARIA insufficient inferential requests and atypical descriptive requests | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of translating theoretical innovative methods into practical, real-world applications within the Innovation Center (IC) development network. It aims to create concrete, evidence-based examples that demonstrate how new technologies and approaches can be effectively implemented to address specific pharmacoepidemiologic and drug safety surveillance challenges. | Developing advanced methods including machine learning to address incomplete information in drug safety studies, validating health outcome algorithms using Natural Language Processing (NLP)-assisted chart review, and applying NLP to analyze cannabis-derived product exposures in electronic health records, ultimately improving FDA's drug safety surveillance capabilities. | EHR data elements; EHR-linked to claims data | 23/09/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | EHR data elements; EHR-linked to claims data | EHR data elements; EHR-linked to claims data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | FE5: Incorporate a range of frequently used engineering features from EHRs into the Sentinel common data model in the Sentinel EHR and claims linked Data Partner network | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of extracting valuable clinical information trapped in unstructured free-text fields within electronic health records. It aims to create a systematic feature engineering approach that can convert narrative clinical notes and text data into structured, analyzable formats for pharmacoepidemiologic research and drug safety surveillance. | Supports the use of Natural Language Processing (NLP) to extract information on five specific medical concepts from Electronic Health Record (EHR) data and make it available in the Sentinel Common Data Model for future drug safety studies, enhancing FDA's surveillance capabilities. | NLP for EHR unstructured data extraction | 23/09/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | NLP for EHR unstructured data extraction | Free-text data from the commercial and development network EHR-claims | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Develop an empirical algorithm to automate negative control identification in Sentinel System using the Data-driven Automated Negative Control Estimation (DANCE) algorithm | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of optimizing the Data-driven Automated Negative Control Estimation (DANCE) algorithm for real-world implementation in large electronic healthcare database studies. It aims to use plasmode simulation to refine the algorithm's performance and then validate the tailored approach through a multisite test case focused on safety endpoint detection, ensuring the method works effectively across different healthcare data environments. | Supports the use of plasmode simulation to evaluate and tailor implementation of DANCE in settings relevant to large electronic healthcare database studies and to apply the tailored DANCE algorithm to a test case incorporating a safety endpoint in a multisite implementation, improving FDA's ability to detect drug safety signals. | Electronic Health Record (EHR) data | 24/03/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Electronic Health Record (EHR) data | Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Support tools that can be used in conjunction with Electronic Health Record (EHR) data, such as machine learning and natural language processing (NLP), and the use of Artificial Intelligence (AI) chart review tools | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of rapidly responding to urgent or emerging drug safety signals that require immediate attention and coordinated action. It aims to leverage key expertise and resources at the Sentinel Operations Center to quickly address time-sensitive safety concerns that may pose risks to public health. | Supports using AI tools to help with medical chart review for emerging safety needs, enabling faster response to urgent drug safety concerns that require immediate attention to protect public health. | Electronic Health Record (EHR) data | 24/10/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Electronic Health Record (EHR) data | Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Sentinel System Task Order to address an Emerging Safety Need | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of efficiently validating emerging safety signals through chart review when traditional structured data is insufficient. It aims to use NLP-supported tools to extract and analyze information from unstructured clinical notes, enabling faster and more comprehensive chart abstraction and adjudication processes for urgent safety investigations. | This allows FDA to apply Natural Language Processing (NLP) capabilities to extract data from Electronic Health Records (EHRs) as needed to address regulatory gaps around emerging safety needs, enabling faster response to potential drug safety concerns. | Claims and EHR data | 24/04/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Claims and EHR data | Claims and Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | FOIA REDACTION (FRED) TOOL | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | FRED should not be considered a high-impact AI use case because: a. It operates as a recommendation system only, with mandatory human review and approval required for all outputs. b. No automated decisions are made - humans retain full control over all redaction decisions. c. It serves as an assistive tool to improve efficiency while maintaining human oversight and accountability for all FOIA compliance decisions. | Generative AI | FRED is designed to support FOIA staff in redacting records more efficiently and consistently. FOIA redaction can be a time-consuming process, and offices can experience large backlogs of requested documents. It aims to use AI to analyze, identify, and generate predictions of text for redaction, thereby improving the efficiency of FOIA response processing. | More efficient releases of documents in response to FOIA requests, reduction in backlogs, helping ensure public access to government information while protecting sensitive data appropriately. | FRED produces a PDF with boxes around text that it recommends for redaction along with comments for the redaction code | 25/05/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | FRED produces a PDF with boxes around text that it recommends for redaction along with comments for the redaction code | Completed 483 forms before redaction and versions of those forms after redaction by FDA staff | Yes | k) None of the above | Yes | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Regulatory Review (AIRR) Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI-assisted regulatory review paradigm addresses the inefficiency and administrative burden of manually searching through vast amounts of disconnected sponsor-submitted and FDA-generated documents by integrating three components: AI-powered prompt engineering for streamlined workflows, real-time regulatory data retrieval systems, and automated document formatting capabilities that maintain human oversight while significantly enhancing review efficiency and consistency. | Enables faster and more efficient regulatory reviews by reducing time spent on document searching and information gathering, allowing FDA reviewers to focus on scientific analysis and decision-making. Maintains high review quality while improving consistency across review teams and reducing administrative burden on expert reviewers. | Preliminary administrative or review summary (available in Word format) for reviewers' verification and refinement | 25/05/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) and contractor teams | Yes | Preliminary administrative or review summary (available in Word format) for reviewers' verification and refinement | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI-assisted Platform for Clinical Pharmacology Review | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI solution addresses inefficiency in the clinical pharmacology review process by reducing time reviewers spend on routine tasks, allowing them to focus their expertise on complex scientific analysis and decision-making that truly requires their specialized knowledge. | The AI integration is expected to enhance agency efficiency by optimizing reviewer time allocation, allowing them to focus on high-value tasks requiring specialized expertise. This leads to improved productivity, reduced delays, and enhanced overall performance. For the public, this translates to more timely and higher quality regulatory reviews of new medications. | AI assisted answers to list of tasks in selected task groups for clinical pharmacology review, including supporting information helping reviewers to identify the source of information from original documents. | AI assisted answers to list of tasks in selected task groups for clinical pharmacology review, including supporting information helping reviewers to identify the source of information from original documents. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CoreDF | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Other | Current nonclinical review processes require manual extraction and analysis of sponsor findings from lengthy PDF study reports, creating inefficiencies in data quality assessment and delays in regulatory timelines. | Expedites nonclinical review by extracting and organizing key safety findings from study reports, allowing FDA reviewers to focus on scientific evaluation and safety assessment rather than manual data extraction. | Sponsor findings from non-clinical study reports | 25/05/2026 | c) Developed with both contracting and in-house resources | IBM | Yes | Sponsor findings from non-clinical study reports | non-clinical study reports (PDFs) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Bioanalytical Study Risk Assessment and Inspection Readiness | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Challenges in assessing large amounts of analytical/bioanalytical study information for risk assessment and inspection preparation in a short period of time | Efficient and thorough review of bioanalytical portions of pivotal studies, enabling risk assessors and reviewers to identify and address potential issues. This benefit promotes public health by ensuring the welfare of study subjects, and helping the office verify the quality, study integrity, and regulatory compliance of Bioavailability/Bioequivalence (BA/BE) studies supporting CDER-regulated drugs. | Summary including reanalysis, deviations from method SOPs or protocols, inconsistencies or gaps in data reporting, deviations from data acceptance criteria, and deviations from the method validation; description of potential impact on study outcome. Outputs are verified by FDA staff. | 25/06/2026 | b) Developed in-house | Yes | Summary including reanalysis, deviations from method SOPs or protocols, inconsistencies or gaps in data reporting, deviations from data acceptance criteria, and deviations from the method validation; description of potential impact on study outcome. Outputs are verified by FDA staff. | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Clinical Study Risk Assessment and Inspection Preparation | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Challenges in assessing large amounts of clinical study information for risk assessment and inspection planning in a short period of time | Efficient and thorough review of clinical portions of pivotal studies, enabling risk assessors and reviewers to identify and address potential issues. This benefit promotes public health by protecting study subjects, and by helping the office verify the quality, study integrity, and regulatory compliance of Bioavailability/Bioequivalence (BA/BE) studies supporting CDER-regulated drugs. | Summary including any inconsistencies, discrepancies, missing information, protocol deviations, unforeseen circumstances, unexpected adverse events, severe or serious adverse events, and modifications to processes or procedures; description of potential impact on study outcome. Outputs are verified by FDA staff. | 25/06/2026 | b) Developed in-house | Yes | Summary including any inconsistencies, discrepancies, missing information, protocol deviations, unforeseen circumstances, unexpected adverse events, severe or serious adverse events, and modifications to processes or procedures; description of potential impact on study outcome. Outputs are verified by FDA staff. | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Assessing Bioanalytical Study Conduct Alignment with Guidance and Method SOPs | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Cross-comparing large amounts of bioanalytical data (reports, validation, tabulated data) with the M10 guidance and aligned method Standard Operating Procedures (SOPs) to identify all possible study conduct issues. | Expedited identification of bioanalytical study conduct issues through efficient cross-comparisons between the M10 guidance, method Standard Operating Procedures (SOPs), and bioanalytical study reports/data before and during inspections, helping ensure data quality and regulatory compliance. | Organized summary of bioanalytical study conduct deviations from M10 principles, regulations, and method SOPs, followed by a summary of potential study impact. Outputs are verified by FDA staff. | 25/01/2026 | b) Developed in-house | Yes | Organized summary of bioanalytical study conduct deviations from M10 principles, regulations, and method SOPs, followed by a summary of potential study impact. Outputs are verified by FDA staff. | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | SCANS Facility Role Predictor | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI was used to identify facilities that are either Active Pharmaceutical Ingredient (API) or Finished Dosage Form (FDF) manufacturers that were either missed or misclassified in regulatory tracking systems. | Successfully identified previously missed facilities without manually reviewing documents, improving FDA's ability to maintain comprehensive oversight of the drug manufacturing supply chain | The output is a classification of Non-manufacturer, API, FDF, or API/FDF | 24/07/2026 | b) Developed in-house | No | The output is a classification of Non-manufacturer, API, FDF, or API/FDF | We used 356H documents, which are vendor submitted documents describing facilities used in the production of a drug product. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: Document Room Submission AI-Assisted Categorization Previously: Document Room Submission Auto-categorization | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Current document room submission categorization process is very manual and costly, impacting end-to-end regulatory review acceleration by creating bottlenecks, increasing processing times, and reducing overall efficiency in the review workflow. | This process enhancement optimizes resource allocation by freeing up personnel for higher-value scientific review activities, improves regulatory predictability for industry sponsors through standardized processing, and strengthens FDA's ability to respond effectively to public health priorities while maintaining comprehensive audit trails and compliance standards. | Submission category and subcategory as well as submission metadata | Submission category and subcategory as well as submission metadata | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: AI-Assisted Drug Review Letter Drafting Previously: Drug Review Letter Generation using GenAI | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This approach to AI-assisted document drafting provides faster implementation with minimal technical requirements, future-proofs document generation capabilities, reduces ongoing maintenance costs, and eliminates technical complexity by replacing code-heavy solutions with dynamic AI prompts. | This approach to AI-assisted document drafting provides faster implementation with minimal technical requirements, future-proofs document generation capabilities, reduces ongoing maintenance costs, and eliminates technical complexity by replacing code-heavy solutions with dynamic AI prompts. | Document content that can be easily embedded either in a PDF/Word document or be shared in an email. | Document content that can be easily embedded either in a PDF/Word document or be shared in an email. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDRH | Medical Data Enterprise Artificial Intelligence (MDE AI) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Create efficiencies in the regulatory review processes for medical devices; reduce administrative burden to staff and allow them to focus their expertise on scientific and clinical work and not administrative processes | Improved efficiency in the administrative overhead of regulatory review workflows | Outputs support regulatory review and include deficiency text adherence to 4PH, insights to support premarket review, data integrity concerns, signal alerts, etc. | 23/09/2026 | b) Developed in-house | Yes | Outputs support regulatory review and include deficiency text adherence to 4PH, insights to support premarket review, data integrity concerns, signal alerts, etc. | Labeled data sets of FDA-specific premarket and postmarket data | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDRH | COMET (Consult Memo Assistant) | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Efficiency of the premarket regulatory review | Improved efficiency in regulatory review workflows using advanced AI tools to leverage institutional knowledge in specific product areas | AI assisted review process analysis with suggested deficiencies | AI assisted review process analysis with suggested deficiencies | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Food AI Decision Engine (FAIDE) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prioritize limited regulatory resources and maximize public health protection. | Reduced regulatory burden on establishments with a lower probability of being violative or causing public health harm; more efficient and effective regulatory oversight. | Probability of being violative per the model's classifier, and whether that probability is above the model's recommended threshold (optimizing sensitivity and specificity). | 23/08/2026 | b) Developed in-house | Yes | Probability of being violative per the model's classifier, and whether that probability is above the model's recommended threshold (optimizing sensitivity and specificity). | Internal FDA sample and inspection data, third-party purchased and open-source data. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Warp Intelligent Learning Engine (WILEE) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify emerging chemical signals and violative food substances by analyzing a large data set in a fraction of the time that it would have taken scientific reviewers to analyze the publications. | By enhancing signal detection and chemical hazard forecasting capabilities, this tool can help anticipate and prioritize hazards, accelerate decision making and proactively mitigate risk to consumers. | A prioritized list of emerging signals and an interactive view of supporting documentation/factors. | 23/03/2026 | c) Developed with both contracting and in-house resources | In-house | Yes | A prioritized list of emerging signals and an interactive view of supporting documentation/factors. | Internally generated data during the premarket review process, web data collated from web crawls and a commercial data aggregator, scientific publications retrieved with API calls, grant data published by NIH. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Rapid Intuitive Pathogen Surveillance (RIPS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify and prioritize incoming sources of potential foodborne outbreaks, maximizing public health by reducing the time burden on regulators. | Enhanced WGS signal detection capability allowing regulators to catch emerging foodborne outbreaks before they can cause widespread public harm. | Probability that an environmental food source is regulated by the FDA. | 25/02/2026 | b) Developed in-house | No | Probability that an environmental food source is regulated by the FDA. | Publicly available WGS metadata from NCBI. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | AI-Powered Assistant for Pathogen Detection (AIPD) | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AIPD is designed to address several key challenges in data analysis for foodborne pathogens, such as accessibility barriers for data sources, manual workflow overhead, knowledge gaps in tool selection, and complex project management. | The expected benefits include enhanced data analysis efficiency, improved food safety surveillance, better resource utilization, and knowledge transfer and training. | The AIPD produces AI-assisted data reports | The AIPD produces AI-assisted data reports | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Product Label and Text Extraction System (PLATES) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual food label data extraction causes slow data accessibility and insights. Manual data processing and standardization causes slow data accessibility and insights. Decentralized food label data limits research and regulatory processing of industry compliance and health impacts. | Reduced burden to HFP reviewers and data scientists reviewing and analyzing food product label data, including ingredient and nutrition research. The capabilities have significantly accelerated the data extraction and entry process, providing standardized and parsed structured data 35.29x faster than the manual process (reducing the manual burden by 97.08%). | The system includes a user interface that allows users to upload food product images to receive extracted, standardized (utilizing FoodTrak standards), metadata-attached structured data for 30+ key food data elements that can be reviewed and saved, exported, or published to downstream databases. | 24/06/2026 | c) Developed with both contracting and in-house resources | Trigent Solutions Inc, Digitrix LLC | Yes | The system includes a user interface that allows users to upload food product images to receive extracted, standardized (utilizing FoodTrak standards), metadata-attached structured data for 30+ key food data elements that can be reviewed and saved, exported, or published to downstream databases. | Internal FDA FoodTrak and OLOAS data. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Data Ingestion and Content Explorer (DICE) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Multiple stakeholders across HFP have the business need to search for content within artifacts and documents uploaded to various systems. For systems like CARA and CARTS, document search capabilities are limited due to the Appian technology stack utilized by these systems. As the Human Food Data Platform continues to grow, it will also need to provide SMEs with the capability to search within the data platform. Offices need a quicker way to search for content within documents and databases to find relevant data across a multitude of use cases including regulatory and compliance reviews, outbreak response investigation, and research tracking and administration. Additionally, multiple HFP offices have business processes requiring extracting structured data from unstructured documents for data analysis, regulatory reviews, and other business intelligence insights which are currently supported through manual operations. DICE will enable SMEs to obtain properly formatted structured data from unstructured data sources. | Accelerates the time for subject matter experts (SMEs) to find relevant data and content lost within images, hand-written documents, emails, and other artifacts, and provides this in a one-stop-shop user experience. Allows users to search through millions of documents quickly and makes data accessible to everyone in the HFP, not just those who have backend access. Extracting text from these artifacts makes it available for further analysis and natural language processing. Data can be further processed to detect sentiment, entities, key phrases, syntax, and topics. An AWS- and API-based architecture brings a flexible and scalable framework to HFP to facilitate search use cases while enabling a cost-effective solution. Shared infrastructure for unstructured and structured intelligent search capabilities minimizes cost across CFSAN offices who have this same need. | The system includes a user interface that allows users to view returned search results, templatize unstructured documents using the intelligent document processing workflow, and view extracted text with confidence scores from unstructured documents. | 24/07/2026 | c) Developed with both contracting and in-house resources | Trigent Solutions Inc, Digitrix LLC | No | The system includes a user interface that allows users to view returned search results, templatize unstructured documents using the intelligent document processing workflow, and view extracted text with confidence scores from unstructured documents. | CARTS system data, CARA system data. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OC | Smart Solution for Docket Management (SSDM) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the extremely labor-intensive and time-consuming process of manually collating, de-duplicating, and categorizing public comments on FDA dockets. | 1. Significant time and resource savings: The platform aims to save the Agency a substantial number of staff hours by assisting in reducing redundant and time-consuming tasks that can take weeks to complete manually. 2. Enhanced processing capacity: The platform will enable FDA to effectively handle large-scale comment volumes. 3. Improved accuracy and quality: The AI-powered deduplication, topic modeling, and keyword flagging can potentially enhance the overall quality of comment processing while reducing human error in manual sorting. | The AI-enabled tool provides two main outputs: 1) a line listing Excel file that organizes comments into groups based on similarity/deduplication thresholds and tags them with AI-identified and SME-approved keywords and topics; and 2) a comment summary Word report that provides structured analysis of themes, key performance indicators, and submitter group breakdowns to assist in regulatory decision-making. | 25/07/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | The AI-enabled tool provides two main outputs: 1) a line listing Excel file that organizes comments into groups based on similarity/deduplication thresholds and tags them with AI-identified and SME-approved keywords and topics; and 2) a comment summary Word report that provides structured analysis of themes, key performance indicators, and submitter group breakdowns to assist in regulatory decision-making. | Public comments on various FDA dockets | No | k) None of the above | Yes | Will explore making the code open source, but it is not there yet. | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/OC | Elsa GenAI Chat Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | Elsa is designed to support FDA employees by providing clear, accurate information and assistance with work-related tasks. Elsa's primary purpose is to help streamline information access and decision-making processes within the FDA context. | Elsa quickly synthesizes and summarizes information, breaking down complex topics to support faster, more informed decision-making; helps refine communication for maximum impact, from brainstorming and outlining content to drafting and proofreading; and helps employees quickly identify key information across multiple sources. | Elsa can generate text-based responses in paragraphs, bullets, or even in tabular format. | 25/06/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Elsa can generate text-based responses in paragraphs, bullets, or even in tabular format. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Assisted Systematic Review and Validation of Analytical Worksheets | a) Pre-deployment The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual review and validation of analytical worksheets creates bottlenecks in regulatory processes, introduces potential human error, and limits scalability of quality assurance procedures across FDA operations. Analytical worksheets serve as critical evidence in legal proceedings, enforcement actions, and regulatory decisions affecting public health and safety. These worksheets are the legal documents that will be used and referenced in a court of law and legal proceedings if FDA determines regulatory action should be taken in accordance with the Federal Food, Drug, and Cosmetic Act and subsequent amending supplements codified in Title 21 of the United States Code. | Increased efficiency in worksheet validation processes, standardized review procedures ensuring consistency in legal documents, faster turnaround times for analytical work supporting enforcement actions, enhanced quality assurance for evidence used in court, reduced human error in legally significant documents, and improved consistency in regulatory processes supporting FDA's mission to protect public health. | Assessment summaries identifying errors and inconsistencies in court-admissible documents | Assessment summaries identifying errors and inconsistencies in court-admissible documents | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Generated Data Processing and Visualization Code Development | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Manual coding for data processing and visualization creates barriers to AI adoption, requires specialized expertise not available across all teams, and limits agency ability to maximize value of existing data investments as directed by OMB M-25-21. | Accelerated development of data processing workflows, increased access to advanced analytics capabilities, reduced dependency on specialized programming skills, improved consistency in data visualization standards, and enhanced agency AI maturity through automated code generation capabilities. | AI assisted code generation for Excel macros and scripts, Power BI formulas and visualizations, data processing algorithms, automated dashboard templates, and reusable code libraries. | AI assisted code generation for Excel macros and scripts, Power BI formulas and visualizations, data processing algorithms, automated dashboard templates, and reusable code libraries. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Enhanced FDA Regulated Commodity Consumption Pattern Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Traditional analytical methods for determining consumption patterns of FDA-regulated commodities are limited in scope and processing speed, which hinders comprehensive market surveillance, trend analysis, and evidence-based regulatory science research necessary for informed policy development. | Identification of commodity consumption patterns and trends supporting regulatory science, enhanced understanding of FDA-regulated commodity consumption, improved research capabilities for market surveillance, better-informed policy development through data-driven insights, accelerated evidence generation for regulatory decision-making, and advanced analytical capabilities supporting FDA's public health mission. | Consumption analysis reports, pattern and trend identification reports, trend predictions and forecasting models, market behavior insights and statistical summaries, correlation analyses between consumption patterns and regulatory factors, and research projects supporting real-world evidence-based regulatory science. | Consumption analysis reports, pattern and trend identification reports, trend predictions and forecasting models, market behavior insights and statistical summaries, correlation analyses between consumption patterns and regulatory factors, and research projects supporting real-world evidence-based regulatory science. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Enhanced High-Dimensional Matrix Dataset Trend and Correlation Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | High-dimensional datasets present analytical challenges that exceed the capabilities of traditional statistical methods, limit the ability to extract meaningful insights from complex regulatory data structures, and hinder evidence-based regulatory science advancement. | Discovery of previously unidentified patterns and correlations in regulatory data, enhanced research productivity supporting HHS/FDA legal mandates and mission, improved data utilization efficiency maximizing taxpayer investment, advancement of regulatory science through sophisticated analytical capabilities, and development of innovative approaches to complex data analysis challenges. | Correlation reports identifying key relationships in regulatory data, trend analysis reports supporting regulatory science, pattern identification summaries for complex datasets, statistical significance assessments, and advanced data visualization outputs, which inform real-world evidence-based decision-making. | Correlation reports identifying key relationships in regulatory data, trend analysis reports supporting regulatory science, pattern identification summaries for complex datasets, statistical significance assessments, and advanced data visualization outputs, which inform real-world evidence-based decision-making. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/ODT | No | ChatBot for Safety Reporting Portal Adverse Events and Product problems submissions | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Increases data integrity and aids in routing the user to the correct commodity group to submit their adverse event or product problem. DRUID AI, a COTS tool, boasts a comprehensive suite of functionalities, encompassing conversational flows and seamless integrations with diverse data sources such as SQL, ServiceNow, UiPath, API, and knowledge base services. Its sophisticated Natural Language Processing and Understanding capabilities empower precise interpretation of user queries across various languages and dialects. | Increased user satisfaction by saving time, providing faster form completion, and reducing confusion about where to report an adverse event or product problem. | The AI routes to the correct form, helps the user complete the report faster, and ensures data integrity. The output uses a knowledge base to answer questions, formats responses, routes to the correct forms, and uses an API to submit the report to SRP without having to use the existing legacy app. | 24/03/2026 | c) Developed with both contracting and in-house resources | Druid | Yes | The AI routes to the correct form, helps the user complete the report faster, and ensures data integrity. The output uses a knowledge base to answer questions, formats responses, routes to the correct forms, and uses an API to submit the report to SRP without having to use the existing legacy app. | Internal FDA structured and unstructured data from FDA.GOV web crawling | No | Yes | not publicly available | k) None of the above | Yes | No | not publicly available | |||||||||
| Department Of Health And Human Services | HHS/FDA/ODT | Machine Learning as a Service: Translate and extract text from images using AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI project is designed to assist with reviewing foreign products, extracting ingredient lists, and supporting border inspections | The ability to quickly translate and extract lists of ingredients from foreign food & drug product labels without the need for human translators | Translated text into English, in JSON format | 22/01/2026 | c) Developed with both contracting and in-house resources | Precise | Yes | Translated text into English, in JSON format | Makes use of Google Translation Hub | No | k) None of the above | Yes | https://git.fda.gov/FDA/OIMT/mlaas | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/ODT | Machine Learning as a Service: Extract data from product labels, business forms, and image files | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI project is designed to assist in capturing data submitted to & reviewed by the FDA by extracting text and data, determining its structure, and saving the information in a more useful format | The ability to extract information such as nutrition information and ingredients from product labels, tabular data, invoices and receipts, and handwritten forms without the need to have users retype or copy & paste the data into FDA applications. | Parsed text and data structured in JSON format, with key/value pairs where appropriate (such as for specific fields in a form or nutrient name & amount on a product label) | 22/01/2026 | c) Developed with both contracting and in-house resources | Precise | Yes | Parsed text and data structured in JSON format, with key/value pairs where appropriate (such as for specific fields in a form or nutrient name & amount on a product label) | Makes use of Google Translation Hub | No | k) None of the above | Yes | https://git.fda.gov/FDA/OIMT/mlaas | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Filer Evaluation prioritization using risk-based decision Machine Learning approach | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The FDA's Office of Information Operations (OIO) has an opportunity to enhance its evaluation capabilities across over 4,000 filers in the current FDA inventory by implementing a systematic, data-driven approach to risk assessment and prioritization. By developing standardized evaluation processes and integrated analytical tools, OIO can optimize resource allocation, improve consistency in risk identification, and strengthen the FDA's capacity to effectively protect public health through targeted regulatory oversight. | OIO is responsible for filer evaluations. There are over 4,000 filers in the current FDA inventory and this ML-based risk scoring approach to identify high-risk filers reduces the burden of sorting through the information manually and provides a standard process for conducting evaluations for the staff. An interactive dashboard has been developed that displays model outputs in various forms for staff use. | The ML-based model provides a complete list of filers with all the relevant information along with their relative risk scores for FDA staff to conduct evaluation of the filers. | 23/01/2026 | c) Developed with both contracting and in-house resources | Precise Software Solutions, Inc. | Yes | The ML-based model provides a complete list of filers with all the relevant information along with their relative risk scores for FDA staff to conduct evaluation of the filers. 
| Import operations data including but not limited to filer evaluation history, corrections to transmitted data, database lookup failures, filer table record creation dates, PREDICT scores | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Electronic translational services for regulatory documents for articles offered for import | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FDA's Office of Information Operations (OIO) seeks to implement automated translation capabilities for foreign language documents essential to import operations. By integrating translation services directly into OIO staff workflows, the Agency can improve import screening to better serve stakeholders while maximizing staff capacity for core public health protection activities. | The implemented solution automated the electronic translation of inspection/investigation and import documents, labels, industry guidance, materials related to policy and regulation, presentations, and educational records. This automation significantly reduced time spent by staff to translate documents, reduced the need to find a translator to read and understand foreign language documents, increased reliability and timeliness for enforcement actions, increased destruction of misbranded FDA regulated products at the IMFs, and increased the ability to provide regulatory materials in foreign languages. | A translation service interface developed within an imports entry review system that utilizes the Google Translate API provides the required translation of the entries entering the US supply chain. This will provide translation to the FDA imports staff without seeking external solutions and is integrated in the current system used by the consumer safety officers who are conducting investigations, import operations, etc. 
| 24/04/2026 | c) Developed with both contracting and in-house resources | Google Translate API / MLaaS / Azure | Yes | A translation service interface developed within an imports entry review system that utilizes the Google Translate API provides the required translation of the entries entering the US supply chain. This will provide translation to the FDA imports staff without seeking external solutions and is integrated in the current system used by the consumer safety officers who are conducting investigations, import operations, etc. | N/a | Yes | N/a | k) None of the above | Yes | N/a - GitHub code is not open source/publicly available - https://git.fda.gov/FDA/OIMT/mlaas | N/a | |||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | AI-Powered Video Analytics for Law Enforcement | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | AI is used to filter relevant video, but outputs are verified by humans, decisions/actions are performed by humans | Computer Vision | The AI is intended to solve the challenge of manually reviewing large volumes of surveillance video, which is time-consuming, labor-intensive, and prone to human error. | Faster identification of persons, vehicles, and events of interest through AI-powered video search and filtering. Improved accuracy and objectivity in surveillance review and analysis. Increased situational awareness via real-time alerting and behavior detection. Greater operational efficiency, enabling limited staff to manage larger video workloads. Data-driven decision-making supported by trend analysis and visual dashboards. | - Object-level detections: bounding boxes with classifications (e.g., person, vehicle type, animal), attributes (e.g., clothing color, bag, face mask), and movement patterns. - Appearance-based search results: lists of matching individuals or vehicles based on facial features, clothing, or license plate. - Real-time alerts: triggered events based on predefined rules (e.g., line crossing, group formation, presence of a vehicle type), sent via connected systems. - Visual summaries: Video Synopsis® clips that compress hours of activity into short, layered visualizations for faster review. - Dashboards and analytics: aggregated data on movement, dwell time, crowding, object counts, and traffic patterns to inform operational decisions. 
| 23/01/2026 | a) Purchased from a vendor | Milestone | Yes | - Object-level detections: bounding boxes with classifications (e.g., person, vehicle type, animal), attributes (e.g., clothing color, bag, face mask), and movement patterns. - Appearance-based search results: lists of matching individuals or vehicles based on facial features, clothing, or license plate. - Real-time alerts: triggered events based on predefined rules (e.g., line crossing, group formation, presence of a vehicle type), sent via connected systems. - Visual summaries: Video Synopsis® clips that compress hours of activity into short, layered visualizations for faster review. - Dashboards and analytics: aggregated data on movement, dwell time, crowding, object counts, and traffic patterns to inform operational decisions. | Yes | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Computer Vision to translate and mine Product Labeling photos to analyze labeling for potential violations | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The Computer Vision (CV) project aims to automate label extraction (identification of label information) for regulatory compliance, reducing manual effort and improving accuracy in detecting label discrepancies. | Computer Vision | The intent is to reduce the amount of time import operations users spend reviewing product labeling of imported products for violations. | Reduce time to spot violations on imported products increasing efficiency of reviews. | Label Text extraction and violation(s) Detection | Label Text extraction and violation(s) Detection | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Intelligent Document Processing to analyze current import entry documentation for potential discrepancies | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The intent is to assist in the process of discrepancy identification between documentation submitted by trade and CBP line data submitted. | By streamlining the manual document review process for standard entry documentation from trade, we can significantly reduce the time required for each line review, freeing up substantial analyst capacity to focus on high-risk shipments that pose greater threats to public health and safety. This efficiency improvement enables better resource allocation, reduces processing bottlenecks, and supports faster clearance times for compliant shipments while maintaining robust oversight. The enhanced operational efficiency directly supports FDA's core mission by enabling more targeted, risk-based resource deployment and improving both trade facilitation and import safety program integrity. | List of discrepancies between document data and CBP line level information | List of discrepancies between document data and CBP line level information | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | fac.gov | AI Audit Resolution Assistant (AIARA) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Agentic AI | The DFI PRF AIARA project leverages AI to streamline the single audit process. The team utilizes Robotic Process Automation (RPA) to extract data from the Federal Audit Clearinghouse to create templates for both the audit notification and management decision letters, and to generate a comprehensive report of the auditee and its findings. | Since its launch, the AIARA has successfully processed and resolved 73 audits. Automation has resulted in an estimated total of 276 hours of work saved. The AIARA has significantly enhanced both efficiency and consistency in the audit resolution process. | HRSA utilizes generative AI to streamline the Single Audit resolution process and the creation of Management Decision Letters to formally close out and resolve Single Audit findings. The AI Audit Resolution Assistant (AIARA) includes a vector database composed of Single Audit documents assigned to HRSA, uses retrieval-augmented generation integrated with a large language model to intelligently summarize audit findings and recommendations, and provides chatbot capability, reducing the cognitive load on HRSA auditors for audit-specific questions. | 24/07/2026 | c) Developed with both contracting and in-house resources | Mindpetal | Yes | HRSA utilizes generative AI to streamline the Single Audit resolution process and the creation of Management Decision Letters to formally close out and resolve Single Audit findings. The AI Audit Resolution Assistant (AIARA) includes a vector database composed of Single Audit documents assigned to HRSA, uses retrieval-augmented generation integrated with a large language model to intelligently summarize audit findings and recommendations, and provides chatbot capability, reducing the cognitive load on HRSA auditors for audit-specific questions. | Single Audit report from FAC.gov | fac.gov | No | No | k) None of the above | Yes | No | ||||||||||
| Department Of Health And Human Services | HHS/HRSA | Knowledge Navigator | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | The objective is to develop an AI model that can answer detailed and complex questions about the key programmatic document, the Application and Program Guidance (APG), that is issued annually before the application cycle opens. The PoC LLM has Application and Program Guidance (APG) documents for 10 loan repayment and scholarship programs. | This will allow loan repayment and scholarship analysts and call center agents to better respond to public inquiries from applicants and participants | BHW has implemented a proof of concept Generative AI (GenAI) Large Language Model (LLM) Knowledge Navigator (KN) to support National Health Service Corps (NHSC) and Nurse Corps loan repayment and scholarship program analysts and call center agents in responding to program applicants and participants. | 24/06/2026 | c) Developed with both contracting and in-house resources | Publicis Sapient | Yes | BHW has implemented a proof of concept Generative AI (GenAI) Large Language Model (LLM) Knowledge Navigator (KN) to support National Health Service Corps (NHSC) and Nurse Corps loan repayment and scholarship program analysts and call center agents in responding to program applicants and participants. | Existing program application guidance documentation (APGs) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Medical records summarization | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI will reduce the amount of time required to review and evaluate a claim. | The AI will take medical records numbering in the thousands and produce a summary document of key elements to support a claims review. | AI will facilitate the collection and preprocessing of unstructured data, and create a condensed (and indexed) document for AI to intelligently review the thousands of pages of medical/legal documents, as part of the claims review process. | AI will facilitate the collection and preprocessing of unstructured data, and create a condensed (and indexed) document for AI to intelligently review the thousands of pages of medical/legal documents, as part of the claims review process. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | NOFO Compliance Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Other | HRSA leadership would benefit from an automated solution to evaluate HRSA Notices of Funding Opportunity (NOFOs) against dynamically changing Executive Orders (EOs) and OMB memos to ensure NOFO compliance with White House priorities. | This solution will enhance operational efficiency by reducing document creation time from weeks to days while ensuring more consistent, error-free NOFOs through automated quality assurance. The solution is intended to provide more consistent, readable NOFOs and reduce barriers for smaller organizations. | The NOFO Compliance Assistant is an innovative application utilizing large language models (LLMs) to generate first drafts of key policy documents. Inputs can include example documents, style guides, and key policy decisions and other documents in the Knowledge Navigator. The NOFO Compliance Assistant also features an editing tool that scrutinizes drafts for inconsistencies or errors, offering feedback for refinement. Current planning leverages Cloud-based services, LLM, Text processing and analysis tools, Natural Language Generation (NLG) and Text Analysis for the implementation. | The NOFO Compliance Assistant is an innovative application utilizing large language models (LLMs) to generate first drafts of key policy documents. Inputs can include example documents, style guides, and key policy decisions and other documents in the Knowledge Navigator. The NOFO Compliance Assistant also features an editing tool that scrutinizes drafts for inconsistencies or errors, offering feedback for refinement. Current planning leverages Cloud-based services, LLM, Text processing and analysis tools, Natural Language Generation (NLG) and Text Analysis for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | PRF Program Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Answer inquiries related to GAO, FOIAs, and Litigation quickly and efficiently | PRB leadership is equipped to manage the PRF program effectively by responding to ad hoc inquiries, enhancing the customer experience while adapting to current staff reductions | The app will use retrieval augmented generation (RAG) to answer questions using a knowledge base composed of PRB's programmatic SME documents. The responses will be sourced and cited from program documents so that they are verifiable. This technology will help scale staff access to deep program information and ease the significant burden of turnover among key staff who possess historical and institutional knowledge by ingesting their key work products into the AI application. | 25/02/2026 | c) Developed with both contracting and in-house resources | GDIT Inc | Yes | The app will use retrieval augmented generation (RAG) to answer questions using a knowledge base composed of PRB's programmatic SME documents. The responses will be sourced and cited from program documents so that they are verifiable. This technology will help scale staff access to deep program information and ease the significant burden of turnover among key staff who possess historical and institutional knowledge by ingesting their key work products into the AI application. | The program is using AWS Bedrock for GenAI services, powered by the Claude 3 Haiku LLM. The knowledge base for the RAG architecture is built on over 2,000 PRF program-specific user guides and documentation. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Scholar Match | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The current candidate evaluation and placement process is complex and challenging for both analysts and participants. Enhancing this process with AI/ML support could optimize resource allocation and candidate satisfaction, significantly impacting workforce distribution and efficiency in critical health areas. | This would improve the process of matching NHSC and Nurse Corps Scholars going into clinical service in underserved communities. | The Scholar Match (SM) leverages AI to enhance the placement process of NHSC and Nurse Corps scholars in communities of need across the U.S. and territories. By analyzing candidate profiles and regional needs, SM recommends optimal placements, ensuring both the fulfillment of organizational needs and the satisfaction of the candidates. Current planning leverages Machine Learning, Recommendation Systems, Cloud-based platforms and Data analytics services for the implementation. | The Scholar Match (SM) leverages AI to enhance the placement process of NHSC and Nurse Corps scholars in communities of need across the U.S. and territories. By analyzing candidate profiles and regional needs, SM recommends optimal placements, ensuring both the fulfillment of organizational needs and the satisfaction of the candidates. Current planning leverages Machine Learning, Recommendation Systems, Cloud-based platforms and Data analytics services for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Scholarship Insight | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The NHSC and Nurse Corps Scholarship reviewers need to receive and score thousands of submitted essays, and the manual process delays the application process. We are looking to build a Generative AI program that examines all of the previous essays and how the scoring rubric was applied, and to train it to do the first cut at scoring the essays. This could enhance the fairness and efficiency of scholarship evaluations, ensuring a thorough review process that supports equitable student opportunities. | Reduce the amount of time required for internal or external reviewers to evaluate NHSC and Nurse Corps scholarship applications and ensure that human reviewers are following the appropriate scoring rubric. | The Scholarship Insight (SI) is designed to support the evaluation of scholarship essays for both the NHSC and Nurse Corps Scholarship Programs by providing detailed analysis to human graders. Aligning with directives to ensure human oversight, SI identifies key themes, strengths, and weaknesses in essays, facilitating a more informed grading process without replacing human judgment. Current planning leverages Cloud-based services, Natural Language Understanding (NLU), Text Analysis, LLM API and Data analysis tools for the implementation. | The Scholarship Insight (SI) is designed to support the evaluation of scholarship essays for both the NHSC and Nurse Corps Scholarship Programs by providing detailed analysis to human graders. Aligning with directives to ensure human oversight, SI identifies key themes, strengths, and weaknesses in essays, facilitating a more informed grading process without replacing human judgment. Current planning leverages Cloud-based services, Natural Language Understanding (NLU), Text Analysis, LLM API and Data analysis tools for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Site Application Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The current application review process requires internal analysts to review many complex documents to determine Site Eligibility to employ a member of the NHSC or Nurse Corps. Enhancing this process with AI/ML support could revolutionize and streamline the NHSC, Substance Use Disorder Treatment and Recovery (STAR), and Nurse Corps Site Application Process. | This would improve the current highly manual process for reviewing Site applications for internal analysts while ensuring that clinical sites are properly approved for NHSC, Nurse Corps, and STAR LRP. | The Site Application Analysis (SA) is designed to support the evaluation of NHSC, STAR, and Nurse Corps Site Applications. SA will allow for faster, more accurate review of Site Applications and allow BHW Regional Analysts to focus on higher-value tasks. Current planning leverages Cloud-based services, Machine Learning, Recommendation Systems, Natural Language Understanding (NLU), and Text Analysis for the implementation. | The Site Application Analysis (SA) is designed to support the evaluation of NHSC, STAR, and Nurse Corps Site Applications. SA will allow for faster, more accurate review of Site Applications and allow BHW Regional Analysts to focus on higher-value tasks. Current planning leverages Cloud-based services, Machine Learning, Recommendation Systems, Natural Language Understanding (NLU), and Text Analysis for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | RefineAI - Enhanced Summary Statements | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | DIR would benefit from an automated system that would correct or resolve grammatical issues, redundant comments, and acronym expansion at the merging stage within ARM, increasing efficiency in panels and devoting more time to discussion, with the hope of more useful feedback without the time pressure of editing. At least ten percent of overall ORC panel time is devoted to editing in real time for grammatical and duplicative issues. On average, discussion of an application takes 60 minutes; if we save 6 minutes of editing time per application, we can lower costs for contractor support and for post-panel editing. In addition, we can shift panel focus to content-specific discussion, promoting higher-quality feedback. | The expected benefits of the automated solution are: - Increasing efficiency during ORC panel discussions, ultimately leading to lower contractor support and ORC costs, by providing a merged summary statement that highlights duplicate comments, expands acronyms, and proposes grammatical fixes at the merging stage to devote more time to content-specific feedback in panel discussions - Increasing quality of feedback to applicants | The DIR staff works directly with business owners to set up Objective Review Committees (ORC). Currently, the logistics contractor pulls raw comments submitted in the Application Review Module (ARM) by the three primary reviewers. The raw comments are sent to reviewers and HRSA staff in the merged summary statement in advance of the ORC for awareness and to facilitate discussion in the ORC panel. However, the Summary Statement Operator (SSO) must correct grammar issues, spell out acronyms, and change to present tense in real time, as well as remove duplicates. DIR would like an automated system that would correct or resolve these issues at the merging stage to increase efficiency in panels and devote more time to discussion, with the hope of more useful feedback without the time pressure of editing. | The DIR staff works directly with business owners to set up Objective Review Committees (ORC). Currently, the logistics contractor pulls raw comments submitted in the Application Review Module (ARM) by the three primary reviewers. The raw comments are sent to reviewers and HRSA staff in the merged summary statement in advance of the ORC for awareness and to facilitate discussion in the ORC panel. However, the Summary Statement Operator (SSO) must correct grammar issues, spell out acronyms, and change to present tense in real time, as well as remove duplicates. DIR would like an automated system that would correct or resolve these issues at the merging stage to increase efficiency in panels and devote more time to discussion, with the hope of more useful feedback without the time pressure of editing. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | FOIA Exemption-Aligned Redaction | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The FOIA staff will leverage this solution to review records in response to FOIA requests, search the internet for publicly available information (names, email addresses, contact information of federal and non-federal individuals), and identify personally identifiable information (PII). The goal is to reduce the burden on the HRSA FOIA staff and process requests quickly. Automating the manual process will save analysts time and effort and will help reduce/prevent human error. The FOIA staff reviews between 35 and 50 pages per hour, totaling 600 hours per month. | The expected benefits of the automated solution are: - Reducing the manual effort and the time that the FOIA staff spends on converting non-PDF electronic records to PDF - Identifying information to withhold/redact under one or more FOIA exemptions - Proposing the redaction markings for human QA - Reducing the size of the PDF | HRSA uses agentic AI to propose redactions and comments for potentially sensitive data elements for Freedom of Information Act (FOIA) staff who review grant documents. The redactions proposed by the system will include the exemption invoked for data elements that are deemed to be not publicly available. The comments will include a URL citing the source for data elements that are deemed to be publicly available. The output of the solution will be a PDF file including proposed redactions (and exemptions invoked) along with citation comments for review. FOIA staff will review the proposed redactions and add any additional data elements to be considered for redaction. The data elements added by FOIA staff will then be queried by a search engine to determine public availability and proposed as redactions (not available) or comments (available). FOIA staff will then review the accuracy of the proposed redactions and comments. This technology will help to alleviate the FOIA staff's workload and process requests in a more expedited manner. | HRSA uses agentic AI to propose redactions and comments for potentially sensitive data elements for Freedom of Information Act (FOIA) staff who review grant documents. The redactions proposed by the system will include the exemption invoked for data elements that are deemed to be not publicly available. The comments will include a URL citing the source for data elements that are deemed to be publicly available. The output of the solution will be a PDF file including proposed redactions (and exemptions invoked) along with citation comments for review. FOIA staff will review the proposed redactions and add any additional data elements to be considered for redaction. The data elements added by FOIA staff will then be queried by a search engine to determine public availability and proposed as redactions (not available) or comments (available). FOIA staff will then review the accuracy of the proposed redactions and comments. This technology will help to alleviate the FOIA staff's workload and process requests in a more expedited manner. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | HRSA Data Warehouse ChatBot | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | HRSA Data Warehouse Chatbot to respond to public inquiries. | Increase transparency by making health data more accessible, actionable, and equitable, enabling faster insights, smarter decisions, and broader community engagement | Helps the public gain quick access to program data (e.g., Area Health Resources Files, Find Healthcare Services, Service Delivery Sites, etc.) | Helps the public gain quick access to program data (e.g., Area Health Resources Files, Find Healthcare Services, Service Delivery Sites, etc.) | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Code Conversion for PowerBI Migration | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Generate code to support migration and conversion from Tableau to PowerBI to save license costs as well as migration labor costs | Saves development costs for the solution, including eliminating contract labor costs | Code that is leveraged to migrate Tableau Reports/Dashboards into PowerBI | Code that is leveraged to migrate Tableau Reports/Dashboards into PowerBI | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | HRSA Fact Sheets | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Generate HRSA's Program Fact Sheets for public use and reduce cycle time by eliminating the manual review and validation process. | The public can immediately find clear, authoritative facts about HRSA programs, funding, workforce, and outcomes. It also strengthens grant proposals, community outreach, and health planning based on accurate, timely information | Provides validated HRSA Fact Sheets that can also embed additional comments so that the public can better understand this data and information | Provides validated HRSA Fact Sheets that can also embed additional comments so that the public can better understand this data and information | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Program Reporting System Knowledge Base Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Help users get relevant information quickly instead of searching lengthy manuals, FAQs, or scattered documentation, and reduce support costs. | Provides instant, accurate answers to user queries, reducing the time spent searching for information. Additionally, it reduces call center support costs and minimizes the number of calls from grantees seeking system help, improving operational efficiency. | AI chatbot integrated with the post-award performance reporting system, which serves as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | AI chatbot integrated with the post-award performance reporting system, which serves as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Program Reporting System Natural Language Search | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Simplifies information availability for faster decision-making and eliminates the maintenance cost of the legacy SOLR search server. | Improves efficiency and saves time by presenting information in a simple, natural language format, helping grant program officers make faster, informed decisions and enhancing overall grant performance monitoring. | A simple natural-language global search functionality for post-award monitoring | A simple natural-language global search functionality for post-award monitoring | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | AI identification of "High-Risk" Health Centers | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Manual identification of high-risk health centers is resource-intensive and may miss key indicators across large datasets. This would support a proactive, data-driven approach to informing site visit schedules and technical assistance planning. | AI-driven risk identification would allow BPHC to better allocate resources, prioritize site visits, and provide tailored technical assistance. This will help improve compliance, operational performance, and ultimately the quality of care delivered by health centers. | The system would use predictive analytics and risk modeling to generate a prioritized list of health centers considered "high-risk" based on predefined indicators (e.g., patient safety concerns, poor quality metrics, application anomalies). Outputs will support more targeted oversight and TA deployment schedules. | The system would use predictive analytics and risk modeling to generate a prioritized list of health centers considered "high-risk" based on predefined indicators (e.g., patient safety concerns, poor quality metrics, application anomalies). Outputs will support more targeted oversight and TA deployment schedules. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Targeted Technical Assistance | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | AI will help identify the health centers/geographic areas that will benefit the most from targeted technical assistance on clinical topics for improving quality metrics. | This will lead to better quality of care and improve health outcomes for patients. At the same time, this will reduce costs and staff burnout and increase patient satisfaction with their care. | Outputs will be: 1. List of health centers requiring TA on specific clinical topics (in alignment with MAHA) 2. More focused ROI by implementing the specific TA identified through the process | Outputs will be: 1. List of health centers requiring TA on specific clinical topics (in alignment with MAHA) 2. More focused ROI by implementing the specific TA identified through the process | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Program Compliance and Reporting Knowledge Base Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Users often need to navigate multiple technical assistance webpages, manuals, documents, and FAQs, or submit inquiries through the contact form, to find answers. This can be time-consuming and strains support capacity. | Provides instant, accurate answers to user inquiries, reducing the time spent searching for information. It would also reduce call center and staff support costs and increase capacity to assist with more complex issues by minimizing calls and inquiries from grantees seeking publicly available information, improving operational efficiency. | AI chatbot integrated with programmatic requirements and the post-award performance reporting system, which serves as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | AI chatbot integrated with programmatic requirements and the post-award performance reporting system, which serves as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NICHD RPAB AI/ML NICHD Relevance Model | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | The primary objective is to improve the efficiency, accuracy, and consistency of grant application referral assignments while streamlining the internal process for referring new applications. The AI-generated output supports subject matter experts by providing additional information that helps them make faster decisions and prioritize applications for review. All AI output is used solely as an assistive tool, and every referral decision undergoes 100% human review. Therefore, this use case does not meet the definition of high-impact AI. | Classical/Predictive Machine Learning | The primary objective is to enhance the efficiency, accuracy, and consistency of grant application referral assignments, while reducing the burden on Subject Matter Experts in RPAB. The AI system is expected to streamline the process of internal referral of new grant applications. | This AI use case increases the efficiency of the grant referral process and ensures difficult applications are triaged more quickly. | Results are presented as class predictions and class probabilities as recommendations for referral liaisons. | 25/04/2026 | b) Developed in-house | No | Results are presented as class predictions and class probabilities as recommendations for referral liaisons. | NIH IMPAC II funded and unfunded grant application data is used. Unstructured text from the project abstract, specific aims, and title is encoded and vectorized for model training and inference. Fiscal year, activity code, and RCDC terms are transformed via one-hot encoding for use in model training and inference. PII related to individuals associated with the grant is kept intact to preserve the integrity of the use case of grant application referral and the trends of researchers' focus on particular scientific areas. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Applying for Grants Chat Bot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Save time for applicants by guiding users through the application process step-by-step, recommending additional resources for grant writing, and helping determine eligibility. | It can guide users through the application process step-by-step, recommend additional resources for grant writing, and help determine eligibility to save time. | Input: Grants and Funding information/processes and FAQs for prospective grantees. Output: Targeted resources related to probing questions for end users. | Input: Grants and Funding information/processes and FAQs for prospective grantees. Output: Targeted resources related to probing questions for end users. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Assisted Referral Tool (ART) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool suggests which study sections an application might best fit for peer review. The applicant can then get a better idea of the topics being reviewed in that study section and the expertise the reviewers are likely to have. This tool is also used internally to help efficiently assign applications into study sections. | This tool provides information to applicants about the context of the eventual review of their applications and increases efficiency of internal study section assignments. | SRG recommendations | 15/01/2026 | b) Developed in-house | Yes | SRG recommendations | Previous grant applications submitted to NIH and assigned to the same study sections. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Autism Spectrum Disorder (ASD) Classification Model for Children using Deep Neural Network | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Automated approaches for table extraction | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Automated Basic-Applied Categorization of extramural grants | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This machine-learning algorithm uses information about NIMH-funded research projects to categorize them as basic or applied research per the federal definitions for each. | The algorithm is intended to be consistent in identifying basic and applied research, reduce burden of review by NIMH staff, and provide a complementary perspective to human review. | Categorization of research as basic or applied. | Categorization of research as basic or applied. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Biomedical Citation Selector (BmCS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Time-consuming human review of individual journal articles from multidisciplinary journals to determine inclusion in MEDLINE. | More efficient and effective indexing and inclusion of relevant journal articles, standardization of citation record selection, and reduced processing time | Sets of citation records that are classified as relevant to biomedicine and the life sciences. | 23/01/2026 | b) Developed in-house | No | Sets of citation records that are classified as relevant to biomedicine and the life sciences. | PubMed citation data that was submitted by publishers and stored in the agency database was used. | No | k) None of the above | Yes | https://github.com/ncbi/biomedical-citation-selector | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Clinical Trial Predictor | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Applications that propose clinical trials that are submitted to NOFOs that do not allow clinical trials cannot be funded, no matter how well they do in review, because they were not reviewed using all appropriate clinical trial criteria. This application allows NIH to identify clinical trial applications submitted to NOFOs that do not allow clinical trials so they can be withdrawn before being reviewed and potentially transferred to a NOFO that does allow clinical trials. | The AI tool predicts whether grant applications may involve clinical trials based on the text of their titles, abstracts, narratives, specific aims, and research strategies. It is very difficult to deal with misclassified CTs that make it to review on a CT-not-allowed FOA: no matter how good the score is, the IC cannot fund them. The CT prediction algorithm is used to help identify potential CTs on CT-not-allowed NOFOs, mainly the parent R01. | Input: IMPAC II application data, including titles, abstracts, narratives, specific aims, and research strategies. Output: Prediction of possible clinical trial submitted to a non-CT NOFO. | 23/05/2026 | b) Developed in-house | No | Input: IMPAC II application data, including titles, abstracts, narratives, specific aims, and research strategies. Output: Prediction of possible clinical trial submitted to a non-CT NOFO. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ClinicalTrials.gov Protocol Registration and Results System Review Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The quality control review process at ClinicalTrials.gov is time- and resource-intensive. | Some potential benefits include increased efficiency, consistent reviews, resource optimization, and increased scalability. | Prediction of whether a quality issue is present in study registration or results records. | 23/08/2026 | c) Developed with both contracting and in-house resources | No | Prediction of whether a quality issue is present in study registration or results records. | ClinicalTrials.gov study record submissions | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Collections Summarization Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Complex, time-consuming methods for discovery of content, and difficulty for users to understand the scope of content through summarization. | Improved discovery and understanding of content in NLM Digital Collections | Summary of the resource presented in text format. | Summary of the resource presented in text format. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | CSR Public Chatbot (CPC) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool answers questions from the public, potential reviewers, and applicants by directly pulling relevant information from official government web pages, instead of searching FAQ lists. | Applicants and reviewers can get their NIH grant application and peer review questions answered quickly and efficiently | This tool recommends original source material that appears to answer the user's question, and allows the user to check the accuracy of the answer. | 22/01/2026 | b) Developed in-house | Yes | This tool recommends original source material that appears to answer the user's question, and allows the user to check the accuracy of the answer. | Applicant FAQs and publicly available content from public.csr.nih.gov website | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | DAIT AIDS-Related Research Solution | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | DAIT POs need to identify grant applications that involve AIDS-related research (ARR) so they can evaluate them for additional funding. | DAIT ARR suggests prioritization of grant applications that are likely to include AIDS-Related Research to assist POs in prioritizing which grants to select, which improved the review time and quality of review for ARR applications. | This application extracts text from grant applications as input, and then uses classification models to predict the priority and category of each grant application as the output. The output is shared along with other grant application metadata in a custom module. | 18/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | This application extracts text from grant applications as input, and then uses classification models to predict the priority and category of each grant application as the output. The output is shared along with other grant application metadata in a custom module. | A dataset was curated to train the model and is evaluated manually by user input. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detecting Overlapping Science (DOS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool detects applications that represent potential duplicate funding (funding the same research in different projects). It examines applications as they are submitted to the NIH and sends a report to relevant personnel in the agency. | Detects and prevents duplicate funding through speedy, real-time examination of incoming grant applications. | This tool recommends a more careful examination of flagged applications to determine if an application is a duplicate of existing funding, in violation of NIH policy | 23/01/2026 | b) Developed in-house | Yes | This tool recommends a more careful examination of flagged applications to determine if an application is a duplicate of existing funding, in violation of NIH policy | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detection of Implementation Science focus within incoming grant applications | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool uses natural language processing and machine learning to calculate an Implementation Science score that is used to predict if a newly submitted grant application proposes to use science that can be categorized as "Implementation Science." | The AI report tool assigns the grant application to a particular division for routine grants management oversight and administration. | For inputs, it leverages NHLBI application text (title/abstract) and classification categories in Dimensions for NIH. For outputs, the report provides NHLBI application metadata (unchanged) and a score for relevancy to implementation science. | 20/01/2026 | a) Purchased from a vendor | Digital Science | Yes | For inputs, it leverages NHLBI application text (title/abstract) and classification categories in Dimensions for NIH. For outputs, the report provides NHLBI application metadata (unchanged) and a score for relevancy to implementation science. | Leverages NIH application data from IRDB. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Expansion of Generative AI (GenAI) Caption Generation for all Collections Videos | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | High cost and low efficiency of video transcript and caption generation. | Improved standardization and accuracy of generated video captions | AI generated captions in text format | 20/11/2026 | b) Developed in-house | No | AI generated captions in text format | Audio extracted from U-Matic videos in MP4 format, hosted on collections.nlm.nih.gov | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Federal IT Acquisition Reform Act (FITARA) Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Contracting officers use this tool to help identify whether a Statement of Work meets the criteria of the Federal IT Acquisition Reform Act (FITARA). | Contracting Officers can use this tool to indicate if Statements of Work are likely to be IT-related, which saves significant manual effort and time required to identify relevant contracts. | User uploads a contract SOW and the FITARA Tool processes it and predicts the likelihood that FITARA applies, with a confidence score. The output data from the tool is displayed via a custom module. | 17/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | User uploads a contract SOW and the FITARA Tool processes it and predicts the likelihood that FITARA applies, with a confidence score. The output data from the tool is displayed via a custom module. | A dataset was curated using NIAID SOWs which were manually labelled to train the classification model. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Generative AI (GenAI) Still Image Tagging | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Other | Lack of a cost-effective mechanism to label and describe digital images. | Enhanced searchability and discoverability of images included in the NLM Digital Collections, increasing access to valuable medical and scientific resources and supporting research and health. | Classification tags and/or image summary in text form | Classification tags and/or image summary in text form | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | individual Functional Activity Composite Tool (inFACT) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Internal Referral Module (IRM) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The output from this AI use case does not drive any agency decision, including the categories listed in section 6 of the memorandum. The output is a recommendation to assign a grant application to the appropriate Agency staff based on the scientific content of the application. The person to whom the grant application has been referred can accept, reject, or reassign the application based on their expertise. | Classical/Predictive Machine Learning | Automated Assignment of Grants to Program Officers | The original IRM application grew out of a desire to refer applications to the appropriate Program Officer to manage the scientific research that fit their portfolio. This manual referral of grant applications still exists within IRM and has been complemented by use of AI/NLP capabilities. | The outputs are referrals to Program Officers, Program Class Codes, Organizational units - Divisions and Branches and Scientific Research Clusters. | 23/02/2026 | c) Developed with both contracting and in-house resources | Leidos and Highrise | Yes | The outputs are referrals to Program Officers, Program Class Codes, Organizational units - Divisions and Branches and Scientific Research Clusters. | We use eRA grant application data for all fine-tuning and optimization of the models. Specifically, we extract the title, abstract, specific aims and public health narrative to train our models for prediction. | No | k) None of the above | Yes | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | JIT Automated Calculator (JAC) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NIGMS requires extra justification to fund investigators whose total grant money exceeds $1.5M. Applicants are required to submit lengthy forms detailing all of their support and those of their key personnel. These forms are quite long and require quite a bit of time to read and tally up funding totals. This NLP searches the entire form and adds up all of the totals to help NIGMS program staff determine total funding for all key personnel of a grant application. | At NIGMS we like to know how much total support an investigator has to ensure that we are not funding PIs who are already adequately resourced. However, JIT Other Support forms consist of many pages of freeform text in PDF format. Thus, it can be quite tedious for program officers (POs) to copy and paste information from these forms into a spreadsheet to determine how much funding a PI has. JAC can perform these calculations for POs automatically (assuming, of course, that the information has been entered correctly by the PIs). | Input: Grant application JIT Other Support Form PDFs from NIH IMPAC II database. Output: Editable spreadsheet of parsed data and funding summary data for each person in the Key Personnel tables of the application. | 23/05/2026 | b) Developed in-house | No | Input: Grant application JIT Other Support Form PDFs from NIH IMPAC II database. Output: Editable spreadsheet of parsed data and funding summary data for each person in the Key Personnel tables of the application. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | LLM Support for Admin Services | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Mapping Sequence Data to Research Outcomes | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Effectively allocating resources needed to process, manage, and store a high volume of sequence data. | More efficient operations and resource allocation | Text summaries identifying in a yes/no fashion if research products were identified | Text summaries identifying in a yes/no fashion if research products were identified | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Medical Text Indexer-NeXt Generation (MTIX) MEDLINE Indexing | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Time-consuming and burdensome manual indexing of MEDLINE citations | Cost-effective, timely, and efficient indexing of MEDLINE citation records. | A set of MeSH terms describing the article topic | 23/05/2026 | b) Developed in-house | No | A set of MeSH terms describing the article topic | The MTIX dataset is approximately 10 million PubMed MEDLINE citations published after 2006. It is publicly available data, used for training and evaluation of the MeSH terms predicted by the algorithm. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | MicroStrategy Evaluation | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NanCI: Connecting Scientists | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NanCI phone/web application to connect scientists. The app uses AI to match scientific content to users' interests. By collecting papers into a folder, a user can engage the tool to find similar articles in the scientific literature and can refine the recommendations by up or down voting recommendations. Users can also connect with others via their interests and receive and make recommendations via this social network. Users: Cancer Research Trainees at NCI and across the USA. | Cancer research trainees indicate feeling overwhelmed by information and finding things of interest is a challenge. Furthermore they feel isolated. NanCI helps them home in on key content of interest and connect with others who share those interests. | User collects a series of papers by bookmarking them into a folder. AI then uses vector matching to find similar papers. | 23/03/2026 | a) Purchased from a vendor | Google; Barnacle | Yes | User collects a series of papers by bookmarking them into a folder. AI then uses vector matching to find similar papers. | PubMed; Onco Daily | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NBS Virtual Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Service Desk Operations | A robust knowledge base supports the NBSC user community, significantly reducing service requests to the ONBS Service Desk. This allows Subject Matter Experts (SMEs) to concentrate on critical operational and maintenance activities. | Supporting all NBSC business workstreams, the ONBS Assistant will enhance user engagement with NBSC Applications by providing comprehensive and personalized assistance. A variety of support document types, such as job aids, CBT training courses, FAQs, and knowledge articles, are the instructional documents used to teach the tool to answer common questions that come into the core team and the Service Desk on a regular basis. ONBS Subject Matter Experts regularly update content. The ONBS Assistant will be hosted in the NIH Business System Cloud (NBSC) and positioned within the ONBS SharePoint portal. | 25/05/2026 | a) Purchased from a vendor | H2O.GPTe | Yes | Supporting all NBSC business workstreams, the ONBS Assistant will enhance user engagement with NBSC Applications by providing comprehensive and personalized assistance. A variety of support document types, such as job aids, CBT training courses, FAQs, and knowledge articles, are the instructional documents used to teach the tool to answer common questions that come into the core team and the Service Desk on a regular basis. ONBS Subject Matter Experts regularly update content. The ONBS Assistant will be hosted in the NIH Business System Cloud (NBSC) and positioned within the ONBS SharePoint portal. | NBS Training guides, Job Aids, and FAQs. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NCI-DOE Collaboration, MOSSAIC project (Modeling Outcomes using Surveillance Data and Scalable AI for Cancer) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | MOSSAIC applies deep learning natural language processing (NLP) and foundation models to population-based cancer data collected by NCI's Surveillance, Epidemiology, and End Results (SEER) program. DOE's Oak Ridge National Lab (ORNL) has data use agreements (DUAs) with multiple SEER registries to access and train models using SEER data. Two APIs are in production use in the data management system used by the SEER registries -- OncoID, which predicts whether a pathology report is related to cancer, and OncoIE, which extracts key tumor characteristics from unstructured pathology report text. Together these APIs are an important part of moving the US towards near real-time cancer incidence reporting. In addition, a third API, OncoMetsID, which predicts whether a pathology report is indicative of metastatic disease, is in a pilot phase for use in conjunction with other sources of information in the registries to identify recurrent disease. | MOSSAIC enhances the infrastructure of the SEER cancer registries by providing tools that can increase the efficiency and accuracy of manual data abstraction by automatically extracting cancer surveillance data elements. SEER registries receive millions of unstructured clinical text documents that must be manually reviewed, leading to a lag in reporting of US cancer incidence trends. Automated tools such as those developed by MOSSAIC will help us achieve near real-time incidence trends and ultimately a more meaningful report card on the status of cancer in the US. | Input: unstructured (free text) cancer pathology reports. Output: varies depending on the algorithm but generally a predicted class (e.g., tumor site) and associated relative confidence score that can be used to tune accuracy | 21/01/2026 | c) Developed with both contracting and in-house resources | development -- Oak Ridge National Lab; maintenance -- Information Management Services (IMS) | Yes | Input: unstructured (free text) cancer pathology reports. Output: varies depending on the algorithm but generally a predicted class (e.g., tumor site) and associated relative confidence score that can be used to tune accuracy | Data is owned by the NCI SEER registries, which are funded by the NCI | No | k) None of the above | No | https://computational.cancer.gov/view-model-new?f%5B0%5D=project%3Amossaic&search_api_fulltext=&sort_by=title_1&sort_order=ASC&items_per_page=10, https://github.com/DOE-NCI-MOSSAIC | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | NHLBI Chat | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This is a generic AI chat tool that provides a secure chat interface (similar to public tools like ChatGPT) for NHLBI staff. This tool enables staff to safely and securely explore how generative AI can be used on their sensitive (but non-PII/PHI) workloads. | NHLBI Chat is a secure LLM tool providing access to the Azure OpenAI API so that all NHLBI staff can explore generative AI for their day-to-day needs. | The Azure OpenAI API accepts text as input and returns text as output. Users enter text through a chat interface on a website. | 24/09/2026 | b) Developed in-house | Yes | The Azure OpenAI API accepts text as input and returns text as output. Users enter text through a chat interface on a website. | No | k) None of the above | Yes | https://github.com/NHLBI/LLM_Chat_Interface | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIAMS AI Chatbot Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | To provide a secure, protected environment for NIAMS staff (OD, EP, IRP and IT) to explore, test, and understand how to use AI to be more efficient with a wide variety of administrative tasks such as general research, summarizing/querying documents, drafting emails, generating programming code, and creating presentation outlines, etc. | The Azure-hosted NIAMS GenAI Chatbot helps employees be more efficient with a wide variety of administrative tasks such as summarizing/querying documents, drafting emails, and creating presentation outlines, etc. | Input: natural text in the form of user questions, user-uploaded documents. Output: Generated text in the form of answers to user questions, generated answers (summaries/queries) based on user documents. | 24/09/2026 | c) Developed with both contracting and in-house resources | Microsoft | No | Input: natural text in the form of user questions, user-uploaded documents. Output: Generated text in the form of answers to user questions, generated answers (summaries/queries) based on user documents. | GPT 4.1 LLM | Yes | Not Publicly Available | k) None of the above | No | Not publicly available | Not Publicly Available | |||||||||||
| Department Of Health And Human Services | HHS/NIH | NICHD RPAB AI/ML Application Referral System | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The primary objective is to improve the efficiency, accuracy, and consistency of grant application referral assignments while streamlining the internal process for referring new applications. The AI-generated output supports subject matter experts by providing additional information that helps them make faster decisions and prioritize applications for review. All AI output is used solely as an assistive tool, and every referral decision undergoes 100% human review. Therefore, this use case does not meet the definition of a high-impact AI. | Classical/Predictive Machine Learning | The primary objective is to enhance the efficiency, accuracy, and consistency of grant application referral assignments, while reducing the burden on Subject Matter Experts in RPAB. The AI system is expected to streamline the process of internal referral of new grant applications. | This AI use case increases the efficiency of the grant referral process and reduces overlapping efforts in grant referral review. | Results are presented as class predictions and class probabilities as recommendations for branch assignment. | 24/08/2026 | b) Developed in-house | Yes | Results are presented as class predictions and class probabilities as recommendations for branch assignment. | NIH IMPAC II funded and unfunded grant application data is used. Unstructured text from project abstract, specific aims, and title are encoded and vectorized for model training and inference. Fiscal year, activity code, and RCDC terms are transformed via one-hot encoding for use in model training and inference. PII related to individuals associated with the grant is kept intact to preserve the integrity of the use case of grant application referral and the trends of researchers' focus on particular scientific areas. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NLP Automated Referral | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | First, the NLP Automated Referral tool produces non-binding recommendations regarding which NIGMS program officer is most appropriate to manage an incoming grant application, along with two alternative suggestions. Program officers must actively accept the assignment or refer the application to a more appropriate program officer, and they have full discretion to ignore or override the AI tool's recommendations. The system's output is therefore not a principal basis for any legal or binding decision; it is one of several informational inputs to an internal workflow choice made by human staff. Second, all funding and programmatic decisions are made through established peer review and programmatic processes, governed by existing NIH/NIGMS policies and human judgment. Final funding decisions are made by the NIGMS Director in consultation with the NIGMS Advisory Council, the NIGMS Deputy Director, and the NIGMS Division Directors, not the individual program officers who manage the applications. As a result, the AI's output does not directly affect an individual's or organization's access to Federal funding or other critical government resources or services, nor does it alter anyone's legal status or rights. Finally, the NLP Automated Referral tool does not fall into any of the categories of AI use cases identified in Section 6 of M-25-21 that are automatically designated high-impact. It is a routing tool used for internal staff portfolio management. | Classical/Predictive Machine Learning | Referring applications manually is tedious and time-consuming. Using an automated approach allows staff to focus their time on more difficult tasks. | Automated referral allows NIGMS to retain institutional referral knowledge by training on historical data, eliminates delays in referral by assigning applications as soon as they come in, and reduces burden on staff members and allows them to allocate more of their time to other high value tasks. | Input: IMPAC II application data, including titles, abstracts, narratives and specific aims. Output: Top three most relevant ICs and POs. | 20/08/2026 | b) Developed in-house | No | Input: IMPAC II application data, including titles, abstracts, narratives and specific aims. Output: Top three most relevant ICs and POs. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | OCIO GenAI Advisor | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | OIT Help Desk Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The OIT help desk would like to shorten resolution time for tickets, help empower users, and generate non-trivial responses to complex questions in a desired format on topics such as security, architecture, PMO policy, and reports. | If used effectively, the OIT help desk can shorten resolution time for tickets, help empower users, and generate non-trivial responses to complex questions in a desired format on topics such as security, architecture, PMO policy, and reports. | Publicly available help desk data, NIST policy in PDF format as user-provided data. Prompt and output are in natural language. | Publicly available help desk data, NIST policy in PDF format as user-provided data. Prompt and output are in natural language. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Person-level disambiguation for PubMed authors and NIH grant applicants | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A person or entity may use several name variations on publications and/or grants, which causes uncertainty in attributing research contributions | Correct attribution of grants, articles, and other products to individual researchers is critical for high quality person-level analysis. This improved method for disambiguation of authors on articles in PubMed and NIH grant applicants can inform data-driven decision making | Harmonized data | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | Harmonized data | Biomedical publications and preprints from PubMed and select publicly available preprint servers, grants titles, abstracts and biosketches from IMPACII, and ORCID data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Portfolio Analysis Summarization Tool (PAST) | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Reduce effort for summarization of custom research portfolios | Rapid summarization of custom research portfolios which will be used to support program staff and others across the institute. | Input: Grants data from QVR. Output: Summaries of grant-related texts. | Input: Grants data from QVR. Output: Summaries of grant-related texts. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Program Classification Coding | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To aid users, a solution has been developed that suggests the specific PCC that should be assigned to each application. | The Division staff is responsible for handling a large volume of applications during each council round, with the task of processing and assigning PCCs. To aid users, a solution has been developed that suggests the specific PCC that should be assigned to each application. | Model Inputs: Impac II data fields (Specific Aims, Project Title, and Study Section); Output: the top 3 predicted PCCs. User views a list of applications and the top 3 suggestions by clicking on a report for the selected council date. User can filter the view by IC/OrgCode and Program Officers' names. | Model Inputs: Impac II data fields (Specific Aims, Project Title, and Study Section); Output: the top 3 predicted PCCs. User views a list of applications and the top 3 suggestions by clicking on a report for the selected council date. User can filter the view by IC/OrgCode and Program Officers' names. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | RCDC AI Validation Tool | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Research Performance Progress Report (RPPR) Report Comparison | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identification of duplication and overlap in grants year over year. | Research Performance Progress Reports (RPPR) are used by recipients to submit progress reports to NIH on their grant awards. AI can analyze such data to identify duplication and overlap in grants year over year. | Input: Grants data from QVR. Output: Identification of duplication and overlap in grants year over year. | Input: Grants data from QVR. Output: Identification of duplication and overlap in grants year over year. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Scientific summaries tool | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Enhancing scientific summary development for communicating scientific achievement. | Within NIAID DIR we have a team that drafts justifications for personnel actions based on the research being performed. This tool will be created to help them quickly and effectively prepare justifications for personnel actions for investigators in specific research fields. | Inputs: scientific publications, CV/Bib, BSC submissions and outcome memos, prior justifications, and clinical protocols. Outputs: Scientific summary | Inputs: scientific publications, CV/Bib, BSC submissions and outcome memos, prior justifications, and clinical protocols. Outputs: Scientific summary | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Similarity-based Application and Investigator Matching (SAIM) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | SRDMS NLP COI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Scientific review officers need to identify individuals who may pose a conflict-of-interest (COI) during the grant application review process. | SRDMS NLP COI is able to automate the identification of individuals who may pose a conflict-of-interest during the grant application review process, which saves significant time and effort by the SRO during application review and promotes consistency in identifying COIs. | Tool ingests grant application PDFs from an upstream source system, eRA, and these applications are processed to extract relevant named entities. The tool returns extracted named entities and metadata in a table that are displayed via a custom module within the SRDMS application. | 19/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Tool ingests grant application PDFs from an upstream source system, eRA, and these applications are processed to extract relevant named entities. The tool returns extracted named entities and metadata in a table that are displayed via a custom module within the SRDMS application. | A labeled dataset of grant applications and associated conflicts of interest is used to calculate pipeline evaluation metrics. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Stem Cell Auto Coder | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Study Section Clustering Tool (SSCT) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Efficiently organizing grant applications into appropriate study sections based on scientific similarity. | The Study Section Clustering Tool (SSCT) enhances the agency's mission by automating and streamlining the organization of grant applications into scientifically relevant study sections, improving efficiency and reducing manual effort. It ensures applications are reviewed by experts with appropriate expertise, adapting over time to changes in scientific fields through periodic model updates. This data-driven support leads to higher-quality peer review processes, promoting more effective research funding decisions. | The AI system's outputs are lists of study sections grouped together based on the scientific similarity of grant application texts. Specifically, it generates clusters of applications that should be reviewed collectively because they share related scientific topics. These groupings serve as recommendations to subject matter experts, who use them to finalize the organization of study sections for peer review panels. | 23/01/2026 | b) Developed in-house | Yes | The AI system's outputs are lists of study sections grouped together based on the scientific similarity of grant application texts. Specifically, it generates clusters of applications that should be reviewed collectively because they share related scientific topics. These groupings serve as recommendations to subject matter experts, who use them to finalize the organization of study sections for peer review panels. | Text of grant applications submitted to the Center for Scientific Review (CSR). | not publicly disclosed as an open government data asset | No | It does not have a standalone publicly available Privacy Impact Assessment (PIA). However, it operates within the Center for Scientific Review General Support System (CSR GSS), which is a FISMA-reportable system that has an associated PIA covering the overall system environment where the AI tool functions. | k) None of the above | Yes | No | It does not have a standalone publicly available Privacy Impact Assessment (PIA). However, it operates within the Center for Scientific Review General Support System (CSR GSS), which is a FISMA-reportable system that has an associated PIA covering the overall system environment where the AI tool functions. | ||||||||||
| Department Of Health And Human Services | HHS/NIH | Synonymy prediction in the UMLS Metathesaurus | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | TB Case Browser Image Text Detection | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Without OCR technology, validation of text on images is manually intensive and, without proper controls, can create the risk of sensitive information coming into the system. | Provides additional protection against PII/PHI ingress into the TB Portals imaging dataset in a far more automated process. | User uploads a DICOM image, which is converted and passed to the AWS Rekognition service. Output is a JSON block with predictions on the location of text within an image. | 19/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, Guidehouse, Research Data and Communication Technologies Corp. | Yes | User uploads a DICOM image, which is converted and passed to the AWS Rekognition service. Output is a JSON block with predictions on the location of text within an image. | Existing TB Portals images with and without text are used to evaluate performance. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Tool for PO Lookup Assignment (TPAL) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This tool helps NIGMS program staff determine the most appropriate SME at NIGMS and/or the most appropriate IC for a research proposal. | There are many occasions in which Division Directors, Branch Chiefs, and Program Officers wish to receive suggestions for the most appropriate people to talk to about a project proposal or where to send a proposal that might not be appropriate for NIGMS. | Input: Free form text in an online textbox. Output: Top three most relevant ICs and POs and their probabilities. | 20/07/2026 | b) Developed in-house | No | Input: Free form text in an online textbox. Output: Top three most relevant ICs and POs and their probabilities. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Transformative Research Award Anonymization Check (TRAAC) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Machine learning pipeline for mining citations from full-text scientific articles | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Machine learning system to predict translational progress in biomedical research | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI Software for Conference/Workshop Summaries | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | CylanceProtect | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIH Grants Virtual Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Provides easy access to grants information and policies. | Chat Bot to assist users in finding grant related information. | The inputs include information available on the NIH Grants and Funding site, such as FAQs. The outputs are answers to questions/prompts provided by the user. | 20/05/2026 | a) Purchased from a vendor | Yes | The inputs include information available on the NIH Grants and Funding site, such as FAQs. The outputs are answers to questions/prompts provided by the user. | Website data and manual. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Open AI SharePoint Document Assistant | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Splunk IT System Monitoring Software | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Aivia | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Provides machine-learning-based object classification in microscopy applications | Provides AI-based segmentation, enhancement, and prediction in microscopy applications. This tool is utilized in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | 23/01/2026 | a) Purchased from a vendor | Leica | No | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | Training data is generated by the microscopy user. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Alphafold | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of atomic models from CryoEM specimens. | This tool uses ML to build de novo atomic models for proteins based on amino acid sequence alone. This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are amino acid sequences, outputs are atomic models. | 23/01/2026 | a) Purchased from a vendor | Google DeepMind | No | Inputs are amino acid sequences, outputs are atomic models. | Models were trained using publicly available data. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Automated approaches to analyzing scientific topics | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Slow and inefficient means by which decision makers are able to evaluate portfolios | Assist decision makers in analyzing topics in their portfolios | Recommendation | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | Recommendation | Biomedical publications and preprints from PubMed and select publicly available preprint servers, grants titles and abstracts from IMPACII, and patent data from USPTO | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | cryoDRGN | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Prediction of protein flexibility and variability | Software that is used to evaluate protein flexibility and variability in the dataset (https://github.com/zhonge/cryodrgn). This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Input is cryoEM imaging data, outputs are protein flexibility and variability. | 22/01/2026 | a) Purchased from a vendor | Open Source | No | Input is cryoEM imaging data, outputs are protein flexibility and variability. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/zhonge/cryodrgn | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | crYOLO | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Automated screening of CryoEM specimens | This tool is an open-source machine learning-based particle picker (https://cryolo.readthedocs.io/en/stable/). This tool automatically picks targets based on its general model or an adapted model using a small number of manually selected particles. It is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are low magnification cryoEM imaging data and, optionally, manually selected targets; outputs are automatically selected imaging targets. | 21/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are low magnification cryoEM imaging data and, optionally, manually selected targets; outputs are automatically selected imaging targets. | Models were initially trained using publicly available data; further training may be performed using the data set being analyzed. | No | k) None of the above | No | https://cryolo.readthedocs.io/en/stable/ | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | CryoSPARC | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of protein flexibility and variability | Proprietary software that is used for cryoEM data processing. Some steps in the workflow use ML to evaluate the protein flexibility and variability in the dataset (https://cryosparc.com/). This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Deep learning - stochastic gradient descent | 22/05/2026 | a) Purchased from a vendor | Structura Biotechnology | No | Deep learning - stochastic gradient descent | Training data owned by commercial software developer. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | DeepEMhancer | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Other | Automated screening of CryoEM specimens | Open-source software that is used to obtain final "sharpened" cryoEM maps (https://github.com/rsanchezgarc/deepEMhancer). This algorithm uses a ML model to estimate the noise in the model and refine the local areas of the map and is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Input is cryoEM imaging data, outputs are enhanced images. | 22/01/2026 | a) Purchased from a vendor | Open Source | No | Input is cryoEM imaging data, outputs are enhanced images. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/rsanchezgarc/deepEMhancer | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Dual Use Research of Concern (DURC) Categorization LLM | d) Retired The use case was reported in the agencys prior years inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | FIJI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Provides machine-learning-based object classification in microscopy applications | Provides AI-based segmentation, enhancement, and prediction in microscopy applications. This tool is utilized in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | 23/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | Training data is generated by the microscopy user. | No | k) None of the above | No | https://github.com/juglab/labkit-ui | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Identification of emerging areas | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identification of the rate of progress across scientific fields to inform data-driven decision making | Help decision makers to target new investments to topics with the greatest potential to accelerate scientific progress | The rate of progress across scientific fields | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | The rate of progress across scientific fields | Citation data for publicly available biomedical publications | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Imaris | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Provides machine-learning-based object classification in microscopy applications | Provides machine-learning-based object classification in microscopy applications. This tool is utilized in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are microscopy images, outputs are object and structure annotations. | 23/01/2026 | a) Purchased from a vendor | Andor | No | Inputs are microscopy images, outputs are object and structure annotations. | Training data is generated by the microscopy user. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ModelAngelo | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of atomic models from CryoEM specimens. | This tool uses ML to build atomic models in cryoEM maps, with or without amino acid sequence input. This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are cryoEM imaging data and optionally amino acid sequence, outputs are atomic models. | 22/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are cryoEM imaging data and optionally amino acid sequence, outputs are atomic models. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/3dem/model-angelo | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIGMS Azure Open AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | There are a number of situations in which administrative activities can be augmented by generative AI, especially when classification of documents is needed but no training data exist. | We are using large language models (LLMs), in particular OpenAI's models, for business process improvement. We have used these models for visualization of grant portfolios as well as numerous classification problems: IC prediction, clinical trial prediction, research area prediction, etc. These models allow us to classify documents through simple prompt engineering rather than the laborious process of creating a custom training set from scratch. These models also allow us to reduce the number of applications that humans need to review from tens of thousands of applications to mere hundreds or fewer for a number of tasks. | Input: Text from various components of NIH grant applications. Output: OpenAI chat completions (text) or text embeddings (vectors of numbers). | 24/09/2026 | b) Developed in-house | No | Input: Text from various components of NIH grant applications. Output: OpenAI chat completions (text) or text embeddings (vectors of numbers). | All data used for this project is internal to NIH, mostly administrative data from the NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Pangolin Lineage Classification of SARS-CoV-2 Genome Sequences | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Process a high volume of sequence data in real time to identify meaningful mutational patterns while minimizing the need for human effort. | Improved user retrieval of SARS-CoV-2 genome sequences based on classification and tracking of specific lineages, including those associated with mutations that may decrease effectiveness of therapeutics or protection provided by vaccination. | Lineage classification identifiers for sequences | 21/04/2026 | a) Purchased from a vendor | http://cov-lineages.org; https://pangolin.cog-uk.io/ | No | Lineage classification identifiers for sequences | Publicly available SARS-CoV-2 sequence data from the GenBank resource was used to develop the tool | No | k) None of the above | No | https://github.com/cov-lineages/pango-designation | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Prediction of protein 3D structures | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Prediction of transformative breakthroughs | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Slow and inefficient identification of topics likely to produce a scientific breakthrough | Predicting discoveries that are likely to be transformative breakthroughs in science can improve data-driven decision making | Prediction of discoveries that are likely to be transformative breakthroughs in science | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | Prediction of discoveries that are likely to be transformative breakthroughs in science | PubMed database | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Ptolemy | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Automated screening of CryoEM specimens | Algorithm used to find and classify areas in low magnification CryoEM images for imaging (https://github.com/SMLC-NYSBC/ptolemy). This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are low magnification cryoEM imaging data, outputs are specimen classifications. | 23/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are low magnification cryoEM imaging data, outputs are specimen classifications. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/SMLC-NYSBC/ptolemy | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Research Area Tracking Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Analysis staff needed assistance identifying research areas associated with individual projects. | Within NIAID we have a team that codes grants based on the research being proposed. They also prepare reports for high-priority research areas. This tool was created to help them quickly identify projects that fall into a specific research field. | Grant title and abstract. Probability Estimates. | 20/01/2026 | c) Developed with both contracting and in-house resources | No | Grant title and abstract. Probability Estimates. | Grant title and abstract and coding. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Systematic investigation of the National Human Genome Research Institute History of Genomics and the Human Genome Project Archive | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | TB DEPOT (Tuberculosis Data Exploration Portal) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There was a lack of deidentified, multidimensional tuberculosis socioeconomic, clinical, imaging, and pathogen genomic data available for researchers to use in developing models and learning more about TB cases. | Provide a no-code application for users to explore and analyze multidimensional TB Portals data. | User selects "cohorts" of tuberculosis cases from TB Portals containing structured clinical/socioeconomic, pathogen genomic, and imaging data for analysis as inputs. The outputs include confusion matrices, cohort comparisons, and visualizations like feature importance in the model. Outputs are available within the application and via an API. | 19/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, Guidehouse, Research Data and Communication Technologies Corp. | Yes | User selects "cohorts" of tuberculosis cases from TB Portals containing structured clinical/socioeconomic, pathogen genomic, and imaging data for analysis as inputs. The outputs include confusion matrices, cohort comparisons, and visualizations like feature importance in the model. Outputs are available within the application and via an API. | TB Portals data is used to train, fine tune and evaluate performance of the model. | No | b) Sex | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | TB Portals Outlier Detection Lambda Function | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | The quality of chest X-ray (CXR) images uploaded by TB Portals Program partners varied significantly, and as the scale of images increased, NIAID needed a way to identify outliers in the imaging dataset. | Detect potential low-quality chest X-rays that may be unsuitable for AI/ML training and flag them for quality improvement. | Input: DICOM file. Output: classification of Outlier or not Outlier via the model. | 21/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, Guidehouse, Research Data and Communication Technologies Corp. | Yes | Input: DICOM file. Output: classification of Outlier or not Outlier via the model. | Existing TB Portals images are used to train, fine tune and evaluate performance of the model. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Topaz | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Automated screening of CryoEM specimens | This tool is an open-source machine learning-based particle picker (https://github.com/tbepler/topaz). This tool automatically picks targets given a small number of manually selected particles. It is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are low magnification cryoEM imaging data and manually selected targets; outputs are automatically selected imaging targets. | 21/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are low magnification cryoEM imaging data and manually selected targets; outputs are automatically selected imaging targets. | Models were initially trained using publicly available data; further training is performed using the data set being analyzed. | No | k) None of the above | No | https://github.com/tbepler/topaz | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Protein Modeling with AlphaFold | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Generative AI | Protein modeling | Enormous leap forward in speed and accuracy in predicting protein folds and complexes. | The output is a series of protein structure files and related confidence/quality scores and metrics for each structure file. | 20/01/2026 | a) Purchased from a vendor | No | The output is a series of protein structure files and related confidence/quality scores and metrics for each structure file. | No | k) None of the above | No | https://deepmind.google/science/alphafold/ | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | eSlate Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool answers questions using genAI by pulling information directly from the eSlate nomination packages. | CSR leadership can make quick decisions about slate approval and make any further improvements to the slate review process. | This tool outputs answers to questions about the quality of study section nomination slates of standing study section members, and if there are potential issues, allows the user to more closely examine the content of the nomination slate directly. | 25/01/2026 | b) Developed in-house | Yes | This tool outputs answers to questions about the quality of study section nomination slates of standing study section members, and if there are potential issues, allows the user to more closely examine the content of the nomination slate directly. | CSR slates by year, employee structures | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | SRO Handbook Bot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This tool answers Scientific Review Officer (SRO) questions by retrieving relevant information from the handbook, saving the user time looking for answers to their questions. | The system will help SROs perform policy and handbook searches based on policy numbers and related keywords, using semantic understanding to improve search accuracy. | Provides summarized search results. | Provides summarized search results. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detection/Identification of Reviewer Expertise and Grant Application Content | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is designed to detect and identify grant application content and reviewer expertise, such as pediatric or other domains, to support accurate matching and more efficient, informed decision-making. | The tool is expected to enhance the agency's mission by improving the accuracy, fairness, and efficiency of processes such as reviewer assignment and grant application analysis. By automating the detection of relevant content and expertise, it reduces manual workload, ensures better alignment between reviewers and applications, and supports more informed decision-making. This leads to more equitable and effective funding outcomes, ultimately benefiting the general public through improved support for research and programs that address critical needs. | The AI system's outputs are classifications or labels indicating whether a grant application involves specific content areas (e.g., pediatric or other domains) and assessments of reviewer expertise based on biosketches, publications, and related data. These outputs are used to support accurate matching between applications and qualified reviewers. | 25/07/2026 | b) Developed in-house | Yes | The AI system's outputs are classifications or labels indicating whether a grant application involves specific content areas (e.g., pediatric or other domains) and assessments of reviewer expertise based on biosketches, publications, and related data. These outputs are used to support accurate matching between applications and qualified reviewers. | This AI use case does not involve training or fine-tuning models. Instead, it uses predefined rules and prompts to analyze existing grant application texts and reviewer information to identify relevant content and expertise. Evaluation is based on validating the accuracy of these prompt-based classifications against known examples. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detection of AI-Generated Reviewer Critiques | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool detects if a given written critique was generated by AI tools, such as large language models (LLMs), as the reviewers are not allowed to write their critiques using these AI assistants. Flagged critiques are examined by NIH staff and discussed with the reviewers. | NIH policy mandates that reviewers produce independent analyses of grant applications based on their expertise and knowledge. The use of genAI to produce a written critique of a grant application is in violation of NIH policy and fails to provide an independent assessment of the application. | This tool outputs classifications of whether the reviewer critique was likely produced by Generative AI or not. | 20/01/2026 | b) Developed in-house | Yes | This tool outputs classifications of whether the reviewer critique was likely produced by Generative AI or not. | critiques written by the reviewers, AI-generated critiques, Amazon book reviews, and AI-generated book reviews | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | MirBot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI tool addresses the challenge of efficiently and consistently processing incoming study PDFs by automatically extracting key information needed for grant review workflows. | The AI tool helps maintain continuity in the grant review process by ensuring consistent extraction and presentation of key study information, even as experienced staff retire or transition. This reduces the training burden on new team members and preserves institutional knowledge through standardized workflows. | The system produces an indexed and preprocessed version of each submitted PDF, enabling context-aware question-answering by the AI. A predefined set of questions is automatically asked to extract relevant information from the document, which is then used to populate the grant prior approval review form details for the associated study. | 24/10/2026 | c) Developed with both contracting and in-house resources | Technatomy Axle | No | The system produces an indexed and preprocessed version of each submitted PDF, enabling context-aware question-answering by the AI. A predefined set of questions is automatically asked to extract relevant information from the document, which is then used to populate the grant prior approval review form details for the associated study. | The AI was trained with existing study PDFs to help the AI properly identify the structure of the PDF files. No PHI from the studies was in these documents, just details about the study and request details. | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Notebooks Hub | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Helping scientists and researchers code their applications, dashboards, and data analyses more quickly, have higher code standards, and gain more insight from their data | More time can be spent on subject matter exploration and understanding than on the coding and analysis frameworks that support these goals. Current assessments indicate 40% speed increases in developing code and applications, with improvements increasing year-over-year | Python, R, JavaScript, Java, etc. code | 25/08/2026 | c) Developed with both contracting and in-house resources | Axle Informatics, Microsoft, OpenAI | No | Python, R, JavaScript, Java, etc. code | NA - Commercial or open source models used. Not trained in house. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Ask Aithena | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Helps researchers stay on top of the latest research in their field and quickly get up to speed in new fields by providing conversational answers while also providing verifiable references for the user to read primary sources. | Increased productivity of researchers due to more time spent doing science and less time researching other people's science | Text blurbs answering the user's questions and the references used to answer those questions. | 23/07/2026 | c) Developed with both contracting and in-house resources | Axle Informatics, Microsoft, OpenAI | No | Text blurbs answering the user's questions and the references used to answer those questions. | NA - Commercial or open source models used. Not trained in house. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Writing code using AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Only a limited number of users are able to develop code. | Assists novice users in writing code to achieve data transformations. | recommended pyspark code | 24/06/2026 | a) Purchased from a vendor | Palantir Technologies deployed application within Foundry | Yes | recommended pyspark code | no agency data provided | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ChIRP - A ChatGPT Model for the NIH Intramural Community | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | We hoped to load text into the chatbot to assist in summarization and identification of prominent themes. ChIRP was moderately helpful for our use case, primarily because we used it at a stage in development where documents could not be directly uploaded into the chatbot | Chatbots with the ability to scan, summarize, and compare document text could likely assist researchers working with large qualitative datasets. | We hoped to use ChIRP to thematically analyze text. | 25/02/2026 | b) Developed in-house | Yes | We hoped to use ChIRP to thematically analyze text. | unknown | No | unknown | k) None of the above | Yes | Not available | unknown | ||||||||||||
| Department Of Health And Human Services | HHS/NIH | Software Approval Agent | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | All software requests currently go to the IT Service Desk, who then need to research the approval status, respond to the end user, and forward the request to other offices in order to process, respond, and route the request to the ISSO or Administrative staff. The agent will help route software requests directly to the correct parties based on the approval status of the requested software. | Helps staff follow standard policies & procedures thereby improving operational efficiencies. | AI system provides user with the status of requested software (Approved for General Use, Approved but Requires Purchase, Approved for Special Use Only, Not Approved, Not Found in Catalog) and then generates an IT service request routed to the correct recipient(s) with the information necessary to process the request. | AI system provides user with the status of requested software (Approved for General Use, Approved but Requires Purchase, Approved for Special Use Only, Not Approved, Not Found in Catalog) and then generates an IT service request routed to the correct recipient(s) with the information necessary to process the request. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | LibreChat | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The NHGRI-internal generative chat and image capabilities of LibreChat will provide an alternative to using publicly available chat services that may expose NIH data. Prompt data is stored within NHGRI's system boundaries and will not be used to train public models. | It will allow NHGRI staff to utilize chat and image generative AI services using available LLMs from various CSPs through a single interface without exposing NHGRI data to train public models. | AI system outputs are in response to user prompts. The LLMs utilized have the ability to recommend (based on prompted preferences), formulate content, and inform decisions based on the details of the prompts. | AI system outputs are in response to user prompts. The LLMs utilized have the ability to recommend (based on prompted preferences), formulate content, and inform decisions based on the details of the prompts. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Ethics AI Agent | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Users repeatedly ask Ethics staff the same type of question, which causes challenges for Ethics staff to manage their tasks effectively. | Helps staff follow standard policies & procedures thereby improving operational efficiencies. | The AI system outputs recommendations based on the Ethics knowledgebase, with links to source references. In addition, if staff choose to consult with Ethics staff, it sends a request to the Ethics team with the conversation history included. | The AI system outputs recommendations based on the Ethics knowledgebase, with links to source references. In addition, if staff choose to consult with Ethics staff, it sends a request to the Ethics team with the conversation history included. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Help Desk AI Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Help Desk staff need to write new responses to IT service desk requests and inquiries. The agent will provide Help Desk staff with ideal language when responding to tickets. | Help IT staff communicate with end user in a clear, consistent message to improve customer service quality and efficiencies. | The AI agent provides responses to IT service desk tickets that staff can copy & paste into ServiceNow fields. | The AI agent provides responses to IT service desk tickets that staff can copy & paste into ServiceNow fields. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Data Management and Sharing Plans - Assistant Review Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool uses NLP to identify the areas in the grant application that deal with the Data Management and Sharing Plan (DMSP). It is designed to assist the reviewer in determining whether the application meets the NIH Data Management and Sharing requirement. | This tool improves productivity and efficiency by streamlining the process and pre-processing the DMS plan against a checklist. It assists the reviewer in their task. The reviewer makes the final determination based on the results and the text of the application. | The tool provides answers for the DMS plan PO checklist. | 24/10/2026 | b) Developed in-house | No | The tool provides answers for the DMS plan PO checklist. | Leverages NIH application data from IRDB. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ITAC SOP Chat | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI provides natural language searching of ITAC SOP, policy, and as-built documents allowing users to easily locate knowledge and easily view the document section that includes the knowledge. | The AI will increase operational efficiency by allowing ITAC users to more easily search and locate knowledge from SOP, policy, and As-Built documents. | The AI outputs natural language responses based on knowledge from ITAC SOP, policy, and As-Built documents. It also outputs citations with a built-in reader to allow users to view document sections where the knowledge was found. | 25/07/2026 | b) Developed in-house | No | The AI outputs natural language responses based on knowledge from ITAC SOP, policy, and As-Built documents. It also outputs citations with a built-in reader to allow users to view document sections where the knowledge was found. | The AI uses a retrieval augmented generation architecture to retrieve relevant document sections and ground the generative responses. Documents are indexed with cognitive search and stored in blob storage. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | BDC Website Search | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI provides natural language searching of BDC website information, making it easier for users to find relevant information. | The AI will provide a more user friendly and efficient information search system. | The AI outputs natural language search responses using BDC website information. The AI also outputs citations for retrieved information. | The AI outputs natural language search responses using BDC website information. The AI also outputs citations for retrieved information. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | BioData Catalyst Harmonized Data Model | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Harmonizing complex scientific concepts in retrospectively collected data. | The AI will help lower the barrier to use and improve the quality of biomedical data for research to improve public health outcomes. | Harmonized, AI-ready data. | Harmonized, AI-ready data. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NHLBI Chat Workflow - Data Management and Sharing Plan | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Assists program officers in reviewing grant applications for a specific query in the checklist (DMSP). | Improved efficiency in workflow for Program officers | Recommendations | 25/06/2026 | b) Developed in-house | Yes | Recommendations | N/A. We use commercial models available via Microsoft Azure, through an NIH STRIDES environment. | No | k) None of the above | Yes | https://github.com/NHLBI/LLM_Chat_Interface | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NHLBI Chat Workflow - Foreign Component | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Assists program officers in reviewing grant applications for a specific query in the checklist (Foreign Component). | Improved efficiency in workflow for Program officers | Recommendation | 25/06/2026 | b) Developed in-house | Yes | Recommendation | N/A. We use commercial models available via Microsoft Azure, through an NIH STRIDES environment. | No | k) None of the above | Yes | https://github.com/NHLBI/LLM_Chat_Interface | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Merops | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Automate and expedite copyediting of scientific manuscripts | More efficient and cost-effective source of copyediting manuscripts. | Recommendation | 25/07/2026 | a) Purchased from a vendor | Shabash | No | Recommendation | The software is proprietary and cannot be trained; it is, however, highly customizable by the end user (e.g., NIAAA staff) to accommodate the journal's specific style preferences. | Yes | k) None of the above | No | https://shabash.net/merops/ | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | OSPIDA RPAB Scientific Coding Assistance Tool (CAT) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | OSPIDA's Referral Program and Analysis Branch (RPAB) is responsible for scientific coding assignments of grant applications. They assign scientific codes to grants based on different objectives, selecting from a list of over 3,000 scientific codes related to NIAID's research areas. | RPAB's Scientific CAT will significantly save time spent on the initial manual assignment of scientific codes and promote consistency in coding across applications. | The models output scientific code predictions by objective, which are displayed in the SCORS application used by RPAB to review grants. | 25/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | The models output scientific code predictions by objective, which are displayed in the SCORS application used by RPAB to review grants. | Multiple years of grant data and RPAB scientific coding assignment data were used to train the NLP models; a different set of grant data and RPAB scientific coding assignment data were used to evaluate model metrics from the trained NLP models. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AWS Exscribo | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Typical transcription services struggle to handle scientific and biomedical terminology, and many platforms do not include the ability to pretrain for specific words, or offer the ability to perform queries on that text in the transcription system itself. | The AWS Exscribo application is specifically built to allow for custom vocabularies, editing of meeting transcriptions, and model retraining, making it well-suited to handle complex medical terminologies. It also has the benefit of using AWS Bedrock, which enables Gen AI prompting on the transcription. | The AI system's output includes transcriptions of audio recordings, and outputs of results from Gen AI prompts on those transcriptions. | 25/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, AWS | No | The AI system's output includes transcriptions of audio recordings, and outputs of results from Gen AI prompts on those transcriptions. | Custom Vocabularies of words or acronyms that are important to conversations, and any edits that are made to the output of the transcription that are used for training. | No | k) None of the above | Yes | https://github.com/aws-samples/sample-scientific-meeting-transcription | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Virtual Assistant for the NIAMS Grant Management Applications | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | New investigators and staff often struggle to locate policies, workflows, and troubleshooting steps buried across SharePoint guides, SOP documents, and in-app help screens for the two NIAMS Grants Management applications. In addition, support tickets and emails consume significant SME staff time. | Streamlined Application Support Operations: The AI-powered virtual assistant can handle user questions and inquiries, reducing the burden and support time spent by grant SMEs and the IT application development team. It provides self-service options for users to resolve issues independently. | Input: NIAMS Grant Management training materials and job aids such as SOPs and user guides. Output: Responses to end-user questions with citations from user guides, training materials, job aids, and SOPs. | Input: NIAMS Grant Management training materials and job aids such as SOPs and user guides. Output: Responses to end-user questions with citations from user guides, training materials, job aids, and SOPs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Grant award process efficiency | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Reducing time burden of repetitive tasks related to the grants award process | Improvement of operational efficiency for making grant awards | Highlight and summary of information found in grant-related documents | 25/01/2026 | c) Developed with both contracting and in-house resources | No | Highlight and summary of information found in grant-related documents | Grant-related documents | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Coding translation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Translates code between programming languages, e.g. rewrite python script in equivalent R commands; translate natural language to code, e.g. write a python script that will convert this gene matrix to a transcript matrix etc. | Saves time in troubleshooting programming code or rewriting code in a different language | Output is computer code in designated programming language. | Output is computer code in designated programming language. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Assist in research on the association between hearing loss and dementia for the NIDCD EARssentials hot topic presentation | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Assist in deepening my understanding of the statistical methodologies applied in research examining the association between hearing loss and dementia. | Improving understanding of statistical methods in population research of hearing loss and dementia. | Explanation of the concepts and methodologies used in population-based study. | Explanation of the concepts and methodologies used in population-based study. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI-generated podcast describing recent advances in methods for analysis of RNA-seq data | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Genomic analysis is a rapidly evolving field with advances in analytical methods arising frequently. This makes it difficult to stay on top of new techniques. | Improved understanding of new genomic analytical methods | An AI-generated podcast episode synthesizing information from several recent publications highlighting new methods for genomic analysis | An AI-generated podcast episode synthesizing information from several recent publications highlighting new methods for genomic analysis | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI-Enhanced Journal Clubs | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | A lack of engagement, efficiency, and productivity at journal clubs due to the burden of gathering topically relevant and high quality journal articles for review. | Make journal clubs more engaging and informative with highly relevant articles recommended by the AI. | Recommended journal articles | Recommended journal articles | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Grants Portfolio Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Facilitate scientific program management and grant portfolio assessment | Better monitoring of grants for alignment with current policies and regulations. | Summaries of sections of grant applications and progress reports. Categorization of grants and applications. | Summaries of sections of grant applications and progress reports. Categorization of grants and applications. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Identifying pain vs non-pain research | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identifying pain research studies is complicated because of the ubiquity of pain in language. NIH program staff working in the pain research field need to be able to distinguish pain research from opioid research, both of which get classified as "Pain Research" by RCDC. This usually requires a significant investment of staff time; however, since a large enough training dataset has been developed, we are training various ML algorithms to identify grants that are truly researching pain compared to grants that merely mention pain in their text. | It is expected that this will decrease the amount of staff time needed to curate a portfolio as a starting point for analyses and that it will speed up the time it takes to carry out an analysis. | Expected output is a file with a list of grants that were identified as pain related. | Expected output is a file with a list of grants that were identified as pain related. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | HEAL Portfolio Topic Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The HEAL Initiative supports hundreds of pain research studies. It is meant to support pain research that cannot be carried out by an individual IC. One challenge we have is describing what HEAL research is. This project is an attempt to give program staff a starting point and visualization that describes the topics covered by HEAL research so that staff can start to understand the HEAL portfolio and how the different programs in HEAL are related or different. It will also allow staff to be able to explain how the HEAL portfolio is different from portfolios of different ICs. It may be possible for staff to carry out this analysis, but using a ML algorithm would require a fraction of the staff time needed for this and be more consistent. While it may be possible for staff to classify the topics in the relatively small HEAL portfolio without AI, having a ML method in place will allow a single staff member to compare the HEAL portfolio to the much larger NIH portfolio using consistent methods. | This will allow us to describe various portfolios that consist of 100s or 1000s of grants without requiring the time of dozens of staff members. The output will allow staff members to communicate summaries of their portfolios consistently and clearly. | Network diagram showing different topics of research in the HEAL portfolio, how "related" these topics are to each other and how many grants fall into each topic. | Network diagram showing different topics of research in the HEAL portfolio, how "related" these topics are to each other and how many grants fall into each topic. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Determining how federal pain research has responded to the Federal Pain Research Strategy. | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The IPRCC has asked NIH to evaluate if all FPRS priorities are being addressed by NIH research. The task would require staff to categorize over 6000 grants into up to 13 FPRS priority areas. A single staff member can curate 5 grants per hour using these parameters, and agreement among staff members is approximately 30%. Therefore, we are carrying out this project using ML to be able to complete the project in a timely manner and without the need of significant staff time investment. It will identify areas of research that require staff to investigate further, instead of having staff curate the whole portfolio. | Allow staff to complete the analysis requested by the IPRCC without the need for grant-by-grant curation of the entire Federal Pain Research Portfolio | Expected output is a CSV file that lists all Federally funded pain research grants and assigns them probabilities of addressing each of the 13 FPRS priorities. | Expected output is a CSV file that lists all Federally funded pain research grants and assigns them probabilities of addressing each of the 13 FPRS priorities. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Transformer-Based Metadata Alignment Workflow | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Inconsistent data elements and differing definitions in glossaries of metadata structures across research data ecosystems hinder interoperability and FAIR-aligned reuse. | Speed improvements in metadata harmonization across ecosystems, enabling more discoverable, reusable, and interoperable datasets to support secondary research, cross-program analysis, and interdisciplinary biomedical discovery. Enhances readiness for large-scale AI/ML applications by providing scalable semantic alignment capabilities and strengthening metadata infrastructure. | Two parallel outputs: (1) Ranked variable pairs using semantic similarity scores generated by transformer-based embeddings (MiniLM, MPNet); (2) GPT-based similarity scores with accompanying natural language justifications derived from semantic evaluation of metadata descriptions. | Two parallel outputs: (1) Ranked variable pairs using semantic similarity scores generated by transformer-based embeddings (MiniLM, MPNet); (2) GPT-based similarity scores with accompanying natural language justifications derived from semantic evaluation of metadata descriptions. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | LLM-Assisted Referral Justification Email Generator | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Referral justifications are often time-consuming to draft manually and require consistent interpretation of referral guidelines and application content. | Reduces Program Officer workload by generating structured draft justifications grounded in referral guidelines. Promotes consistency, transparency, and standardization in referral workflows while improving efficiency and decision traceability. Supports broader adoption of AI in operational decision support with human-in-the-loop oversight. | Structured draft referral justifications that cite relevant referral guidelines and assess alignment between guideline content and the application's title, abstract, and specific aims. | Structured draft referral justifications that cite relevant referral guidelines and assess alignment between guideline content and the application's title, abstract, and specific aims. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI Assist | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This solution is designed to address the challenges of manually processing and analyzing large volumes of unstructured data (including documents, reports, and policies), which is often time-consuming, labor-intensive, and prone to human error. | The implementation of an AI-powered analytics solution will streamline data processing, reduce human error, and enable faster, more accurate decision-making at NINDS. This will enhance operational efficiency, strengthen compliance, and allow staff to focus on higher-value tasks. | Text summary, document comparison, keyword extraction, assistance with writing, filling out forms, creating presentations, spreadsheets | Text summary, document comparison, keyword extraction, assistance with writing, filling out forms, creating presentations, spreadsheets | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Portfolio analysis and grant summarizing program | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Extracts specific information from hundreds of grant applications in a consistent, reproducible way | This AI program automates the process of analyzing text from a few to many hundreds of grants using a custom prompt and an Azure OpenAI model | It dynamically identifies columns containing text for analysis, intelligently groups related data, and sends it to the AI for processing, one grant at a time, ensuring reproducible output. The results are then saved back to a new Excel file, providing a clear and auditable trail of the analysis. This tool allows for AI-assisted portfolio analysis. | It dynamically identifies columns containing text for analysis, intelligently groups related data, and sends it to the AI for processing, one grant at a time, ensuring reproducible output. The results are then saved back to a new Excel file, providing a clear and auditable trail of the analysis. This tool allows for AI-assisted portfolio analysis. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | RAG System For Travel Planning | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The chatbot answers user questions based on the user guide and may also categorize IT helpdesk tickets. | Instant answers to common travel questions and reduced hours spent manually searching through the user guide | Recommendations to users on filling out travel documentation. | Recommendations to users on filling out travel documentation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Generating Metadata for Web Archive Resources | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Time-intensive process needed to generate accurate and descriptive metadata about web resources | More consistent, reliable, accurate and publicly available information about resources | Draft metadata about archived web resources. | Draft metadata about archived web resources. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Incorporating External Information into Taxonomy | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Time-consuming and inefficient process of reading journal articles and manually copying new organism names from those articles into internal databases | Reduced time spent manually reading journal articles to find novel organism names and manually entering those names into internal database | Spreadsheet of new organism names to be reviewed by taxonomy curators | Spreadsheet of new organism names to be reviewed by taxonomy curators | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Using Llama to summarize PubMed Central (PMC) full text articles that contain information on protein function | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Inefficient and manual process to assign accurate function to proteins and protein models | This will increase efficiency of curators' work in providing up-to-date functional annotation for prokaryotic protein family models for use in the annotation pipeline and in adding this to RefSeq proteins. | Article summaries | 25/04/2026 | b) Developed in-house | No | Article summaries | PMC data | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Leveraging NLP and LLMs to identify and characterize NIH prevention research via 160-topic taxonomy | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Semi-automate manual curation | More timely assessments of NIH spending in specific areas within prevention research. | Classification of grants by health conditions, risk factors, study designs, and prevention research type. | 25/04/2026 | c) Developed with both contracting and in-house resources | Microsoft Azure, Westat | Yes | Classification of grants by health conditions, risk factors, study designs, and prevention research type. | Publicly available grant information (ApplID, Grant Number, Title, Abstract, Public Health Relevance). | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Driving Efficiency and Expansion of Dietary Supplement Label Database Data through AI | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Enhance database QA and enhance database records acquisition rate | More rapid sourcing of dietary supplement label data and enhanced quality assurance of database records | Increased number of monthly sourced labels from the current rate of 1500/month and increased accuracy of database record data fields | Increased number of monthly sourced labels from the current rate of 1500/month and increased accuracy of database record data fields | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI-enabled User Support and Impact Monitoring | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | User support for DSLD data inquiries and monitoring of DSLD usage and impact in the research community. | Support AI adoption in the federal workforce, offer new ways to address the complex analytical needs of DSLD's super-user community by allowing for a deeper exploration of the data than the standard DSLD web interface can provide, and provide the opportunity to evaluate the AI tool and to make incremental improvements by fine-tuning the model and interface. | Chatbot-style interface | Chatbot-style interface | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Section 508 Azure OpenAI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | This tool will streamline access to compliance resources, guidance, and support, empowering NIH staff and stakeholders to meet federal accessibility requirements. Designed to efficiently handle help desk inquiries, the chatbot will adhere to NIH's security protocols and provide accurate, role-specific information. | This tool will streamline access to compliance resources, guidance, and support, empowering NIH staff and stakeholders to meet federal accessibility requirements. Designed to efficiently handle help desk inquiries, the chatbot will adhere to NIH's security protocols and provide accurate, role-specific information. | Recommendations to Section 508 resources and guidance | 25/06/2026 | c) Developed with both contracting and in-house resources | Summome | No | Recommendations to Section 508 resources and guidance | Currently using Azure OpenAI's training model and MSFT CoPilot, but planned development to use knowledge repository through RAG | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Enhancing the RCDC with Generative AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Generative AI | The goal is to assess whether generative AI can enhance the RCDC process by minimizing time-consuming, resource-intensive routine and manual tasks and improving overall efficiency. | Gen AI is expected to save time, reduce manual workload, and improve productivity by automating resource-intensive processes. These efficiencies support the agency's mission by enabling better resource allocation and enhanced categorization of NIH research. | The Azure cloud platform provides secure access to OpenAI's ChatGPT model, which is connected to search indexes pre-loaded with relevant datasets. These datasets include internal agency-provided data, such as meeting transcripts, and publicly available data from NIH RePORTER. ChatGPT responds to prompts to execute various tasks such as summarizing meeting transcripts and notes, and the scientific content within curated sets of grant applications. ChatGPT is also utilized to recommend an appropriate RCDC category for a grant application and provide explanations for its recommendations. Additionally, ChatGPT is prompted to predict semantic types for thesaurus concepts, identify hierarchical relationships between concepts, cluster similar concepts, and suggest synonyms for specified terms. | 24/05/2026 | a) Purchased from a vendor | Microsoft | No | The Azure cloud platform provides secure access to OpenAI's ChatGPT model, which is connected to search indexes pre-loaded with relevant datasets. These datasets include internal agency-provided data, such as meeting transcripts, and publicly available data from NIH RePORTER. 
ChatGPT responds to prompts to execute various tasks such as summarizing meeting transcripts and notes, and the scientific content within curated sets of grant applications. ChatGPT is also utilized to recommend an appropriate RCDC category for a grant application and provide explanations for its recommendations. Additionally, ChatGPT is prompted to predict semantic types for thesaurus concepts, identify hierarchical relationships between concepts, cluster similar concepts, and suggest synonyms for specified terms. | Text data from meeting transcripts and notes, and publicly available grant data from NIH RePORTER database. The data is indexed using Azure Cognitive Search AI and securely stored in an Azure cloud storage container. A Retrieval-Augmented Generation (RAG) chatbot is employed to retrieve the indexed data, and prompts are developed to effectively query the data and generate responses using ChatGPT. LLMs respond in a manner that can cite specific language in the data sources, allowing subject matter experts to validate LLM-generated outputs. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIH Travel Policy AI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI-powered chatbot is designed to address a key operational challenge: the high volume of routine travel policy inquiries directed toward NIH administrative staff. These questions, often repetitive and time-consuming, divert valuable human resources from more complex and strategic responsibilities. Specifically, the AI chatbot leverages Generative Artificial Intelligence (AI) to provide accurate, consistent, and real-time responses to questions related to the Federal Travel Regulation (FTR) and the NIH Travel Policy Handbook. By doing so, it significantly reduces the dependency on staff to manually research and craft responses to standard inquiries. In addition, the chatbot serves as the foundation for a self-service portal for the NIH community. This portal empowers employees and stakeholders to independently access authoritative travel policy guidance 24/7, improving efficiency, enhancing user experience, and ensuring policy compliance across the organization. | The Office of Financial Management (OFM) envisions this AI-powered chatbot as a critical support tool for NIH staff during the upcoming transition to a self-service travel planning model, aligned with the government-wide shift to GSA's ETS Next travel system, recently named GO.gov. With this transition slated to begin in 2026, the chatbot will serve as an intelligent, always-available assistant that simplifies the travel planning process for thousands of NIH employees. Enhanced Operational Efficiency. Improved User Experience for NIH Staff. Support for a Modern, Self-Service Government. Increased Compliance and Accuracy. Indirect cost benefit to public. Up to 60–80% reduction in inquiry volume handled by human agents. 
160 FTE hours saved per month at the central NIH Travel office. 1000 FTE hours saved per month across the IC Community Travel offices. Drastic reductions in average response times, often from days or hours down to seconds. | The chatbot outputs text-based, interactive responses tailored to helping NIH staff plan and manage official travel in compliance with policy. These outputs are designed to be helpful, policy-compliant, user-specific, and non-decisional, serving as a productivity aid rather than an authority for travel approval. | 25/08/2026 | c) Developed with both contracting and in-house resources | Infer Solutions | No | The chatbot outputs text-based, interactive responses tailored to helping NIH staff plan and manage official travel in compliance with policy. These outputs are designed to be helpful, policy-compliant, user-specific, and non-decisional, serving as a productivity aid rather than an authority for travel approval. | Federal Travel Regulation and NIH Travel Policy Handbook documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | PowerAutomate Delinquent Submissions | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Other | Senior Auditors are required to spend significant time, approximately 25% of an FTE per auditor annually, on manually monitoring award recipients' submission status, contacting recipients, providing oversight, and documenting compliance at multiple intervals leading up to the fiscal year end. | By automating these processes, the AI will enable auditors to redirect roughly 25% of their time from low-value administrative tasks to high-skill activities that generate measurable impact, increasing efficiency, strengthening oversight, and sustaining our division's annual ROI of approximately 600%. This will ultimately return more funds to NIH from award recipients, directly supporting the agency's mission and benefiting the public through more effective use of federal resources. | Each award recipient that has an audit requirement was built into a SharePoint list including company name, CAGE Code, FYE, CYE, auditor assigned, and oversight (Grants Management, contracting officer, etc). Multiple layers of stacked Power Automate check today's date versus FYE and CYE (for Final incurred cost submissions and Provisionals, respectively) and send email notification to the vendor 6 months before a submission is due. The automation copies the auditor and other government oversight, enters the date the communication was delivered. A second level program runs daily checking until we are 3 months away from the due date and then a similar process with a different email and instructions are delivered to the award recipient. 
Finally, another layer of Power Automate calculates when a submission is delinquent and notifies the company of the implications of late submission with auditor and Government Oversight on copy. | 25/07/2026 | b) Developed in-house | No | Each award recipient that has an audit requirement was built into a SharePoint list including company name, CAGE Code, FYE, CYE, auditor assigned, and oversight (Grants Management, contracting officer, etc). Multiple layers of stacked Power Automate check today's date versus FYE and CYE (for Final incurred cost submissions and Provisionals, respectively) and send email notification to the vendor 6 months before a submission is due. The automation copies the auditor and other government oversight, enters the date the communication was delivered. A second level program runs daily checking until we are 3 months away from the due date and then a similar process with a different email and instructions are delivered to the award recipient. Finally, another layer of Power Automate calculates when a submission is delinquent and notifies the company of the implications of late submission with auditor and Government Oversight on copy. | Data regarding NIH DFAS audited entities over prior year assignments. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/OASH/OIDP | ABE (AI-driven. Beneficial. Efficient.) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | By centralizing federal HIV program information in a custom Knowledge Base, ABE addresses the challenges of fragmented data and inefficient workflows, supporting operational efficiencies through a reduction in labor force and contracts, and providing the ability to quickly access, summarize, analyze, and create resources. | ABE is expected to deliver key benefits by streamlining operations, reducing costs, and improving data-driven decision-making related to HIV/AIDS programs. ABE will serve as a mechanism to expose other federal partners to the usage of AI within a protected knowledge base, increasing work output efficiencies and fostering federal partnerships through expanded Knowledge Base assets and usage. | Generative AI content, images, summarizations, and analysis based on federally approved assets. The Knowledge Base is in compliance with the Gender Ideology and Preventing Woke AI Executive Orders. | 24/01/2026 | c) Developed with both contracting and in-house resources | ICF, DataSurge | No | Generative AI content, images, summarizations, and analysis based on federally approved assets. The Knowledge Base is in compliance with the Gender Ideology and Preventing Woke AI Executive Orders. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/ACF/OA | Sub Can Line Finder | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | The LLM helps compile an aggregated report of costs by taking in a specific CAN, category, sub CAN line item description, and a projection cost from the user (either by a CSV or manual entry) and returning line items from the open-closed report and requisition purchase order report related to the sub CAN line item description, such as supplier name, document number, document type, and total obligations for that sub CAN line item. | The Sub CAN Line Finder LLM helps find related obligations and line items from various reports and returns information crucial to a budget officer when creating a spend plan for their office. The LLM helps ACF Discover track various line items and pulls the information into one place: the Spend Plan module, where ACF can track yearly budgets and monitor budget health. | The LLM helps compile an aggregated report of costs by taking in a specific CAN, category, sub CAN line item description, and a projection cost from the user (either by a CSV or manual entry) and returning line items from the open-closed report and requisition purchase order report related to the sub CAN line item description, such as supplier name, document number, document type, and total obligations for that sub CAN line item. 
| 24/10/2026 | a) Purchased from a vendor | Palantir Technologies | Yes | The LLM helps compile an aggregated report of costs by taking in a specific CAN, category, sub CAN line item description, and a projection cost from the user (either by a CSV or manual entry) and returning line items from the open-closed report and requisition purchase order report related to the sub CAN line item description, such as supplier name, document number, document type, and total obligations for that sub CAN line item. | N/A- using an integration of OpenAI as our model | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/OCIO | https://healthdata.gov | AI Harvest Service | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | The Harvest Service addresses inconsistent or incomplete metadata across distributed HHS open data sources by normalizing descriptions, creating concise summaries, and tagging data assets for improved searchability | Enhanced discoverability of HHS data from distributed agency-owned systems; improved user accessibility of health- and human-related data; faster identification of relevant data assets for researchers, policymakers, and the public; and increased value of open data investments. | AI-generated concise data asset descriptions and standardized metadata tags integrated into HHS Data Hub asset records; improved open data records surfaced on HealthData.gov for public consumption. | 25/07/2026 | a) Purchased from a vendor | Tyler Technologies | Yes | AI-generated concise data asset descriptions and standardized metadata tags integrated into HHS Data Hub asset records; improved open data records surfaced on HealthData.gov for public consumption. | Open metadata and data asset descriptions from multiple HHS OpDiv/StaffDiv open data portals. No PII is used. | https://healthdata.gov | No | k) None of the above | No | ||||||||||||
| Department Of Health And Human Services | HHS/OCIO | Federal Assistant AI Agent | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | HHS and its Family of Agencies have a diverse, dispersed base of content that can be confusing to navigate and requires a user to have knowledge of HHS org structure to be able to obtain answers to their questions; leveraging Agentic AI using Turnkey app can reduce burden on citizens to find answers to their questions about HHS' services, and can reduce the volume of calls to HHS agency contact center representatives to answer common questions | Allows end-users to leverage a single point of entry and plain language to find information across all HHS and its family of agency websites, drawing only on authoritative government sources to provide answers to questions in a conversational manner without requiring the user to be able to navigate and understand a complex web of content and bureaucracy. Reduction in costs for contact centers via lower call volume and decreased time spent per call. | Plain language, conversational responses to customer inquiries via chatbot, providing sources for information as needed to increase accuracy of information and direct end-users to relevant resources. | Plain language, conversational responses to customer inquiries via chatbot, providing sources for information as needed to increase accuracy of information and direct end-users to relevant resources. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/OS/ASFR | Similar Opportunities (KNN) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Agencies do not receive the highest number of proposals from capable applicants, as applications generally tend to come from the same set of applicants. | Increase the quality and capability of applicants submitting grant proposals to federal agencies. | Match of agency grant requirements to competent applicants | 24/10/2026 | a) Purchased from a vendor | MicroHealth | Yes | Match of agency grant requirements to competent applicants | Grants.gov public website | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/OS/ASFR | Applicant Help Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Answer user questions about Grants.gov | Faster answers to questions at a lower cost | System help documentation and funding opportunity listings | 19/05/2026 | a) Purchased from a vendor | Business Performance Systems | Yes | System help documentation and funding opportunity listings | Grants.gov public website | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Text Analyzer Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Program staff authoring NOFOs need capabilities to simplify language and ensure NOFOs remain compliant with the Plain Writing Act of 2010. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Displays Flesch-Kincaid grade level and an overall readability score of NOFO text. | 23/07/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Displays Flesch-Kincaid grade level and an overall readability score of NOFO text. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions AI Writing Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Generative AI | Program staff authoring NOFOs need capabilities to simplify language and ensure NOFOs remain compliant with the Plain Writing Act of 2010. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Generates a simplified rewrite of the selected text and presents a side-by-side comparison with the original. | 25/05/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Generates a simplified rewrite of the selected text and presents a side-by-side comparison with the original. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Recipient Risk Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Grant managers need an efficient method to conduct risk assessments before issuing financial assistance awards to prevent fraud, waste, and abuse. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Generates a risk score for each prospective recipient and lists the top contributing factors (e.g. prior findings) and data sources. | 19/03/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Generates a risk score for each prospective recipient and lists the top contributing factors (e.g. prior findings) and data sources. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Non-Competing Continuation Approval Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Grant managers need an efficient workflow to identify and analyze differences in non-competing continuation budgets and narratives from year to year. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Produces Non-Competing Continuation eligibility recommendations for grants staff to act on. | 21/12/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Produces Non-Competing Continuation eligibility recommendations for grants staff to act on. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Helpdesk Agent | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Generative AI | GrantSolutions users need an efficient method to get answers to their login and access-related questions. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Provides information and helpful guidance to resolve common account related questions. | 25/07/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Provides information and helpful guidance to resolve common account related questions. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Non-Competing Continuation Review Tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Generative AI | Grant managers need an efficient workflow to identify and analyze differences in non-competing continuation budgets and narratives from year to year and ensure narratives promote federal priorities and mission. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Flags potential compliance risks and delivers a structured summary that feeds reporting dashboards and enables secure sharing with other authorized systems. | 25/07/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Flags potential compliance risks and delivers a structured summary that feeds reporting dashboards and enables secure sharing with other authorized systems. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/OCR | ChatGPT | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Generative AI | Staffing shortage | More efficient investigations. | ChatGPT is used to break down complex legal concepts in plain language and identify patterns in court rulings impacting Medicaid services. | 25/07/2026 | a) Purchased from a vendor | OpenAI | No | ChatGPT is used to break down complex legal concepts in plain language and identify patterns in court rulings impacting Medicaid services. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/OCR | CoPilot Outlook | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Generative AI | Staffing shortage | Faster correspondence with public. | Previously used to revise emails | 25/08/2026 | a) Purchased from a vendor | Westlaw | No | Previously used to revise emails | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | DECIDE | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | Document drafting and editing | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | Document summarization | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | Not required to disclose | AWS Kendra Search tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Kendra addresses the technical limitations of efficient access to public information. | The SAMHSA STORE houses a tremendous wealth of mission-critical information for public consumption and utilization. Kendra provides an efficient and comprehensive way for the public to access this information. | A comprehensive scan of SAMHSA STORE materials based on a public query in an efficient and effective User Experience. | 25/06/2026 | a) Purchased from a vendor | AWS | No | A comprehensive scan of SAMHSA STORE materials based on a public query in an efficient and effective User Experience. | Not required to disclose | No | k) None of the above | Yes | Not open source (part of AWS Suite of products) | ||||||||||||
| Department Of Homeland Security | CBP | DHS-2705 | Smartphone Information Forensics Triage | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Quickly translate and summarize the content of text messages, prompting the inspecting employee to review the actual raw text if warranted. | Save worker time by providing an alternative to lengthy bitwise forensic device inspections through the application of an expedient triage tool; reduce the number of higher-level inspections. | Translations and summary are shown only on the display/monitor, and are not saved or transmitted. | Translations and summary are shown only on the display/monitor, and are not saved or transmitted. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-313 | Advanced Analytics for X-ray Images (AAXI) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Computer Vision | Reduction of low-risk empty commercial vehicles. | AAXI aims to address the problem of anomaly detection in empty commercial vehicles entering the United States at land border ports of entry. The AI models achieve this goal by encoding past X-Ray images of vehicular border crossings in a semantically meaningful way and comparing the current crossing to detect differences amongst the images to identify anomalies. Benefits include enhancement of the capability of humans to consistently detect items of interest/concern present (and possibly concealed) in vehicles crossing into the United States, and increased clearance rate at border crossings so that vehicles operating safely and lawfully may pass through the border faster. | AAXI compares current crossing images to previous crossings of the same tractor/trailer which have been adjudicated by a CBP Officer and recommends further review if warranted. | AAXI compares current crossing images to previous crossings of the same tractor/trailer which have been adjudicated by a CBP Officer and recommends further review if warranted. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-314 | Advance RPM Maintenance Operating Reporter (ARMOR) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | The Advance Radiation Portal Monitor (RPM) Maintenance Operating Reporter (ARMOR) project provides predictive maintenance of RPMs, detecting issues with the equipment before they cause the screening lane to be inoperable. | ARMOR will shorten time to service/repair/maintenance of radiation portal monitors by two weeks. ARMOR will allow better distribution of resources (travel, spare parts, etc.) and expected cost decrease could be 25-50%. Through decreased outage time, and prediction of equipment degradation, ARMOR will increase radiological/nuclear (R/N) security on US borders. | The system will provide a listing of malfunctioning RPMs categorized by issue severity and predicted date of failure. The outputs will be used to create service tickets. | The system will provide a listing of malfunctioning RPMs categorized by issue severity and predicted date of failure. The outputs will be used to create service tickets. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2721 | AI Resume & ATS App | a) Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Speed and scale: During disasters, resume volume spikes and manual review can’t keep up; Format variability: Resumes arrive as PDFs, Word files, text, and scans, making uniform processing difficult; Consistency: Different reviewers apply criteria differently, leading to uneven shortlists and missed candidates; Traceability and defensibility: Hiring choices must be fast, explainable, and audit ready; Searchability: Teams need to quickly find candidates with specific qualifications (for example, 5+ years of EMS experience); Centralization: Candidate information is scattered across files and events, slowing coordination and handoffs. | How AI helps: Reads every format: Scans and reads PDFs, Word files, text, and images so all resumes can be processed; Extracts key details: Pulls out skills, certifications, education, locations, roles, dates, and years of experience from free form text; Understands the job: Reads job descriptions, separates must have and nice to have qualifications and applies appropriate weights; Matches and scores: Compares each resume to the job and calculates a clear match score; Ranks candidates: Sorts candidates by score to produce a prioritized shortlist; Highlights strengths and weaknesses: Summarizes where a candidate aligns well and where they fall short; Flags critical gaps: Calls out missing must have requirements (for example, licenses, certifications, clearances, or minimum years); Explains results: Shows the evidence behind each recommendation, what matched and what didn’t; Conversational search (optional): Lets HR ask plain language questions about the candidate pool (for example, “Show EMS candidates with 5+ years in Region 2”); Human oversight: Routes sensitive or low confidence cases to HR/SMEs for review before moving forward. Benefits: Faster: Processes resumes in seconds and handles very large volumes during surge events; More consistent: Applies the same criteria to every resume, reducing variation across reviewers; Better decisions: Ranked lists with clear reasons improve triage and interview selection; More efficient: Cuts manual screening time so staff can focus on final selection and onboarding; Transparent and reviewable: Captures inputs, scores, explanations, and reviewer actions to support audits and continuous improvement. | Structured candidate profiles: Standard fields (skills, certifications, education, location, years of experience) enable fair comparison and precise search/filtering; Ranked lists with match scores: Orders candidates by fit to required and preferred qualifications for fast, defensible shortlisting; Clear explanations: Shows the specific evidence behind each score, including matched items and gaps, to support transparent decisions; Gap/mismatch flags: Highlights missing or insufficient requirements to speed triage and targeted follow up; Dashboards and exportable reports: Filters (for example, location, availability, qualifications) that help HR slice results and coordinate next steps; Optional human in the loop checks: Configurable SME/HR validation for high impact roles or edge cases, maintaining human control over outcomes. | Structured candidate profiles: Standard fields (skills, certifications, education, location, years of experience) enable fair comparison and precise search/filtering; Ranked lists with match scores: Orders candidates by fit to required and preferred qualifications for fast, defensible shortlisting; Clear explanations: Shows the specific evidence behind each score, including matched items and gaps, to support transparent decisions; Gap/mismatch flags: Highlights missing or insufficient requirements to speed triage and targeted follow up; Dashboards and exportable reports: Filters (for example, location, availability, qualifications) that help HR slice results and coordinate next steps; Optional human in the loop checks: Configurable SME/HR validation for high impact roles or edge cases, maintaining human control over outcomes. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2749 | Real-Time Language Translation Services | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of ICE personnel often lacking timely access to interpreters in offline or low-connectivity environments, making it difficult to communicate with individuals who speak little or no English during operations. | These tools help reduce delays caused by language barriers in the field and support clearer two-way communication during routine, special, and emergency operations. They reduce reliance on ad hoc workarounds when interpreters are not immediately available and allow personnel to focus more on mission activities rather than the logistics of basic communication. | The planned platforms and mobile applications use AI translation models to convert spoken or written language between English and other languages in near real time. Personnel can speak or type into the tool, which then provides translated text or audio to support two-way conversations during field operations, interviews, and removal processes. The tools are designed to function in offline or low-connectivity environments where possible, recognizing the challenging conditions in which ICE personnel often operate. | The planned platforms and mobile applications use AI translation models to convert spoken or written language between English and other languages in near real time. Personnel can speak or type into the tool, which then provides translated text or audio to support two-way conversations during field operations, interviews, and removal processes. The tools are designed to function in offline or low-connectivity environments where possible, recognizing the challenging conditions in which ICE personnel often operate. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-131 | Automated Target Recognition (ATR) Developments for Standard Screening | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Computer Vision | AIT systems need to use Automated Target Recognition algorithms to detect objects while maintaining passenger privacy. | The purpose of this use case is to improve upon Automated Target Recognition (ATR) algorithms used to reduce privacy concerns because a TSO is no longer required to view Advanced Imaging Technology (AIT) images. The expected benefits are to increase detection, reduce false alarms, and improve efficiency and passenger experience. | The system reproduces the threat location, which is viewed as a bounding box on a representative human figure, for TSO resolution. | The system reproduces the threat location, which is viewed as a bounding box on a representative human figure, for TSO resolution. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-132 | Accessible Property Screening (APS) Checkpoint CT Prohibited Items (PI) Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Computer Vision | The AI solution is needed to look for and classify non-explosive prohibited items, because this AI solution alongside the legacy explosive algorithms provides a complete solution for TSA. AI models can process vast amounts of data in real-time, identify anomalies, and provide a way to quickly evolve in identifying new threats with a speed and accuracy that humans cannot match. Once this AI solution is tested in the lab and in the field, TSA will have the capability of using Image on Alarm Only to enable TSOs to bolster accuracy while focusing on human-centered priority tasks. | This AI helps airplane luggage checks by the TSA officers scanning bags to have a continuous always watching partner to alert them to anything suspicious. Currently, a Transportation Security Officer (TSO) who is assigned to every X-ray equipment at an airport checkpoint visually inspects each image. This officer resolves the system generated explosive alarms as well as visually inspecting the image for the presence of non-explosive prohibited items such as guns and sharp objects (see TSA Travel site). TSA is working on developing new Artificial Intelligence/Machine Learning (AI/ML) algorithms to automate the search for the non-explosive prohibited items (e.g. guns, knives, etc.). Once a threat is found, the algorithm displays bounding boxes around the suspect item for the operator to then investigate and adjudicate. These AI solutions benefit the public by providing a consistent and uninterrupted level of threat detection as an added layer of security. The ML algorithms allow the TSA officers to be more flexible and to better prioritize their attention on important items to improve security. | AI system output is a set of 3-dimensional bounding boxes that is displayed on the X-ray image. The bounding boxes are placed on top of objects or areas where the algorithm believes it has found a prohibited item (threat object). | AI system output is a set of 3-dimensional bounding boxes that is displayed on the X-ray image. The bounding boxes are placed on top of objects or areas where the algorithm believes it has found a prohibited item (threat object). | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-133 | Walk-Through Metal Detector (WTMD) Alternative Automated Target Recognition (ATR) Developments | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Computer Vision | Improve detection over traditional WTMD to include non-metallic threats. | Artificial intelligence (AI)-enhanced Millimeter wave (mmWave) detectors are used as an alternative to Walk-Through Metal Detectors (WTMDs) for passenger screening to detect both metallic and non-metallic threats and prohibited items on passengers at the security checkpoint. These detectors will provide both increased security and a better passenger experience. | The AI outputs target coordinates to the operator viewing station which is viewed as a bounding box on a representative human figure. | The AI outputs target coordinates to the operator viewing station which is viewed as a bounding box on a representative human figure. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-134 | Synthetic data for improved Automated Threat Recognition (ATR) in checkpoint screening | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Generative AI | Images are used to train Automated Threat Recognition (ATR) and other AI models and systems to detect prohibited items in the screening processes. | To create synthetic data that can be used to improve Automated Threat Recognition (ATR) algorithm development. Synthetic data can be quicker to produce, which will improve effectiveness by addressing and adapting to new threats more quickly. Accessible Property Screening (APS) and On-Person Screening (OPS) are working with vendors and evaluating AI-based synthetic data generation techniques to bolster the pool of training data available to develop machine learning algorithms in ATR applications. | Images that mimic the human body and various threats. | Images that mimic the human body and various threats. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2399 | Electronic Evidence/Video Recording Transcription and Summarization Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | This will cut investigative processing time significantly, while also increasing accuracy. | The Transportation Security Administration (TSA) uses body-worn cameras that incorporate artificial intelligence (AI) technologies as a part of the underlying software which transcribes and translates video footage. The technology provides rapid access to Law Enforcement Officer (LEO) and Investigative data, through transcription. This will cut investigative processing time significantly, while also increasing accuracy. | The AI will transcribe audio/video data and provide a printable artifact. The AI only provides recommendations; the final usable data is reviewed and certified by TSA staff. | The AI will transcribe audio/video data and provide a printable artifact. The AI only provides recommendations; the final usable data is reviewed and certified by TSA staff. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2305 | USCIS Document Translation Service | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | a) High-impact | High-impact | Generative AI | Reduce the man-hours needed for translating evidence documents by leveraging AI document translation technology | Integrate AI and GenAI models trained on relevant subject matter (e.g., immigration law, visa/immigration applications, family/adoption certificates, and other sourced processing materials) to provide fast, accurate translation of written or other digital documents in various languages. Real-time Interpretation: Utilizing AI and GenAI-powered speech-to-speech, speech-to-text translation tools for efficient communication, consultations, and other interactions within DHS. Across all DHS components the need to support language translation and transcription is crucial for operations and adjudications. | The service delivers an image-to-image translation that is displayed side by side with the original document to aid officers in reviewing the evidence and preparing for the interview. | The service delivers an image-to-image translation that is displayed side by side with the original document to aid officers in reviewing the evidence and preparing for the interview. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2514 | USCIS Speech Translation Service | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Provide efficient communication, consultations, and other live applicant interactions in USCIS offices. | Reduce the manpower needed to verbally communicate with applicants and ensure that they are directed in an accurate and efficient manner. | Provides speech to speech and speech to text translation in multiple languages through government-issued iPads. | Provides speech to speech and speech to text translation in multiple languages through government-issued iPads. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-194 | AI Enabled Autonomous Underwater Vehicle | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Computer Vision application onboard the ROV detects items of interest (IoI). The ROV also uses AI for collision avoidance. Once an IoI is detected, a bounding box is placed around the suspected IoI in the image and the user is alerted to a potential IoI. The user then reviews the image and decides if a dive team is required for further inspection. The AI output does not serve as a principal basis for any decisions or actions. | Computer Vision | Customs and Border Protection (CBP) desires to identify potential Items of Interest (IoI) on vessels more quickly, efficiently, and safely. Provides increased shared situational awareness in real time for CBP and strategic partners, and improves mission planning and agent and officer safety while reducing reactionary gaps. | Customs and Border Protection (CBP) intends to identify potential Items of Interest (IoI) on vessels through the use of autonomous systems, which will allow CBP to more efficiently and safely increase shared situational awareness, improve mission planning and agent/officer safety, and reduce reactionary gaps. | AI output will include object avoidance, automated mission execution, and may include imagery of potential Items of Interest (IoI). | AI output will include object avoidance, automated mission execution, and may include imagery of potential Items of Interest (IoI). | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-234 | Relocatable Multi-Sensor System | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Sensors and CUAS capabilities to support significant events. The AI combines sensor data from RF detections, radar, and infrared/electro-optical cameras. It combines all feeds and provides detections on the user interface back to users. Fully standalone and airgapped system. Looking to identify non-RF drones. System does not have any mitigation capability, system cannot autonomously mitigate an aircraft, it only detects and provides a digital track on the GUI/map for further user investigation. This use of AI does not serve as a principal basis for decision or action. | Classical/Predictive Machine Learning | Border security, detection of Small Unmanned Aircraft Systems (SUAS), and multi-sensor fusion. | The system uses advanced sensor technology to differentiate valid items of interest (IOI), such as unmanned aircraft systems and humans, from other detections such as animals or other environmental objects. By integrating radar and other sensor data, the system filters out false alarms, ensuring more accurate identification of potential IOI. This capability enhances CBP's ability to focus on legitimate risks while minimizing the time spent on non-threatening activities, improving operational efficiency at border and security checkpoints. | The outputs include real-time data identifying and categorizing potential items of interest, while filtering out false or non-relevant items of interest like animals. These outputs are used to provide situational awareness and support decision-making for CBP personnel. | The outputs include real-time data identifying and categorizing potential items of interest, while filtering out false or non-relevant items of interest like animals. These outputs are used to provide situational awareness and support decision-making for CBP personnel. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2363 | Anomaly Detection COV Structure | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Computer Vision is intended to detect anomalies in non-intrusive inspection images. The images are then shared with officers who review the detections within the images (represented as a polygon). If the officer feels physical review is required, the vehicle is moved to secondary inspection for more thorough review by an officer. The output of the AI does not serve as a principal basis for a decision or action. | Computer Vision | The Anomaly Detection Algorithm (ADA) models are intended to solve several key problems for U.S. Customs and Border Protection (CBP) related to the screening of passenger and cargo vehicles: improving the detection of anomalies and contraband, enhancing efficiency in image review, enhancing human capability to consistently detect items of interest or concern, addressing high traffic volumes and resource constraints, and supporting the analysis of complex inspections. | CBP is seeking Anomaly Detection Algorithm (ADA) models capable of operating on CBP systems to enable rapid screening of commercially owned vehicles (CoVs). The objective is to develop a suite of algorithms that enhance CBP's Non-Intrusive Inspection (NII) image analysis, improving the detection of anomalies and contraband. These algorithms are intended to assist CBP officers in efficiently reviewing images, with a particular focus on identifying concealed contraband and anomalies in passenger vehicles and cargo conveyances. The implementation of ADA models will enhance human capability to consistently detect items of interest or concern, including concealed objects, in vehicles entering the United States. Additionally, these algorithms will enhance throughput efficiency at ports of entry, enabling the expedited processing of compliant vehicles while maintaining robust security standards. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2365 | Anomaly Detection POV Structure | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Computer Vision is intended to detect anomalies in non-intrusive inspection images. The images are then shared with officers who review the detections within the images (represented as a polygon). If the officer feels physical review is required, the vehicle is moved to secondary inspection for more thorough review by an officer. The output of the AI does not serve as a principal basis for a decision or action. | Computer Vision | The Anomaly Detection Algorithm (ADA) models are intended to solve several key problems for U.S. Customs and Border Protection (CBP) related to the screening of privately owned vehicles (PoVs): improving the detection of anomalies and contraband, enhancing efficiency in image review, enhancing human capability to consistently detect items of interest or concern, addressing high traffic volumes and resource constraints, and supporting the analysis of complex inspections. | CBP is seeking Anomaly Detection Algorithm (ADA) models capable of operating on CBP systems to enable rapid screening of passenger and cargo vehicles. The objective is to develop a suite of algorithms that enhance CBP's Non-Intrusive Inspection (NII) image analysis, improving the detection of anomalies and contraband. These algorithms are intended to assist CBP officers in efficiently reviewing images, with a particular focus on identifying concealed contraband and anomalies in passenger vehicles and cargo conveyances. The implementation of ADA models will enhance human capability to consistently detect items of interest or concern, including concealed objects, in vehicles entering the United States. Additionally, these algorithms will enhance throughput efficiency at ports of entry, enabling the expedited processing of compliant vehicles while maintaining robust security standards. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2565 | CBP Common AI Service (CCAIS) Image Analysis and Data Correlation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case is not yet live in production, and the current ATT only covers use within the CCAIS platform. The use case needs further review prior to the ATT for ICAD integration. The model can identify people, animals, and vehicles as well as extract license plate information, and we will need to determine how exactly the extracted information will be used for operations if it is determined to be feasible. The AI inventory team will need to review once the AI Use Case is fully defined. | Generative AI | Reduces the amount of time agents need to spend analyzing imagery by automatically flagging images for review. | The AI’s intended purpose is to solve the challenge of efficiently monitoring typically unoccupied or restricted environments for unauthorized human or vehicle presence. It automates the initial detection of such activities, which can be resource-intensive and prone to delays when done manually. The expected benefits include enhanced security of sensitive areas, increased operational efficiency through automation, the ability to scale oversight and respond more effectively to potential incidents, and better protection of public assets, sensitive environments, and critical infrastructure, alongside more efficient use of public resources in security operations. | The AI system outputs image analysis of what is in the image. It can identify vehicles and determine the type and license plate number. It can identify if people are present and if they are armed. It can identify environment and conditions. The output is provided through bounding boxes around items of interest within the image and through textual descriptions. | The AI system outputs image analysis of what is in the image. It can identify vehicles and determine the type and license plate number. It can identify if people are present and if they are armed. It can identify environment and conditions. The output is provided through bounding boxes around items of interest within the image and through textual descriptions. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2568 | Non-intrusive vessel and object detection tool (Tethered Aerostat Radar System (TARS)) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI will analyze images from the sensors to determine what the item in the image is (e.g., a vessel). The image location will also be correlated with any publicly available AIS (Automatic Identification System) data for maritime traffic. AIS is used as a filtering mechanism in most cases to filter out legitimate traffic. The user is provided with information for detected objects lacking an AIS signal. After the alert of a detection, a trained CBP agent or user reviews the image to identify and classify the activity taking place. This use of AI does not serve as the principal basis for decisions or actions. | Computer Vision | Detect, identify, and classify vessels and objects in the maritime environment. | Utilizing AI to automate maritime object detection with real-time outputs to streamline data flows, intended to increase the efficiency of existing resources and minimize the mission-critical decision-making timeline. | The AI will analyze images from the sensors to determine what the item in the image is (e.g., a vessel). The image location will also be correlated with any publicly available AIS (Automatic Identification System) data for maritime traffic. AIS is used as a filtering mechanism in most cases to filter out legitimate traffic. The user is provided with information for detected objects lacking an AIS signal. After the alert of a detection, a trained CBP agent or user reviews the image to identify and classify the activity taking place. This use of AI does not serve as the principal basis for decisions or actions. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-317 | RAPTOR (Rapid Tactical Operations Reconnaissance) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI system processes data from radar, infrared sensors, and video surveillance to detect and track suspicious activities along U.S. borders. By incorporating AI-powered vessel registration, aircraft tail number, license plate, and object detection, RAPTOR significantly boosts domain awareness. If the OCR can capture vessel or license plate information, that information is run through CBP Super Query; if derogatory information is returned, or other information of interest appears in the imagery (e.g., a high number of gas cans in the image), the information is sent via SMS or email to the field for response to the potential activity. Agents log into RAPTOR and review the image for accuracy and validity to avoid any investigation of the wrong boat. Neither the AI nor its output serves as a principal basis for a decision or action. Human review takes place prior to any final decision to act, and then personal interaction leads to any follow-up decisions. | Computer Vision | Provide Tactical Domain Awareness for CBP Agents, making law enforcement efforts more efficient. | RAPTOR will significantly increase domain awareness and the agency’s ability to engage in intelligence-driven operations. The AI capability acts as a force multiplier and saves personnel from analyzing video feed from a stationary camera and manually noting all boat identifiers, improving their ability to respond quickly to potential threats and gather critical intelligence for law enforcement and border control operations. | Text transcription of vessel registration/documentation data and photographs of the vessel. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-204 | Semantic Search and Summarization for Investigative Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides a natural language contextual search and summarization capability against existing Reports of Investigation and other investigative data, producing more relevant search responses and investigative insights in the form of data, information leads, or connections that HSI personnel can use to inform investigations. The AI output (search queries and responses) provides investigators with easy-to-read search responses that are accompanied by links to source material for further analysis. Any data used to produce these investigative insights are first obtained through legal means and processes for the purposes of law enforcement investigations and do not significantly impact the categories listed in the definition of “high-impact AI.” Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for enforcement decisions. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process, involving verifying any insights as evidence. | Generative AI | This use case intends to solve the problem of searching and extracting relevant information from large volumes of unstructured investigative data. | The benefits of using this AI include increased efficiency, reduced risk of missing valuable information, and enhanced investigative capabilities. | The outputs of this AI technology are the extracted relevant information and summaries. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2552 | Mobile Device Forensics for Investigations | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case falls into a presumptive high-impact category under M-25-21 Section 6 (j) due to its role in supporting law enforcement activities, specifically the application of digital forensic techniques. However, it does not meet the definition of high-impact AI as outlined in OMB Memorandum M-25-21 because its outputs do not serve as a "principal basis" for decisions or actions with legal, material, binding, or significant effects on an individual or entity's civil rights, civil liberties, or privacy; or human health and safety. The AI capabilities, including classification, decoding, and origin analysis, are designed to assist analysts in prioritizing human review rather than independently determining outcomes. For example, AI-generated tags and app artifacts are suggestions that require manual validation and further investigation before enforcement actions are taken. All outputs are reviewed as part of a broader investigative process before any actions are taken. | Computer Vision | The AI is intended to solve the problem of analysts having to manually organize, decode, and review large volumes of complex mobile device data, which makes it difficult to quickly identify information relevant to an investigation. | AI-generated category tags, app artifacts, and origin classifications allow analysts to more effectively prioritize human review of mobile device data that may be responsive to the investigation. | The platform’s AI outputs include: (i) AI‑suggested category tags for extracted media (videos and images) and apps, based on user‑selected categories (e.g., media: cars, drugs, weapons; apps: chat, spoofing, cryptocurrency); (ii) AI‑decoded app data artifacts, such as chats, contacts, and locations; and (iii) AI predictions about whether a media file was captured on the extracted device or obtained from another source. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2595 | Open-Source Intelligence for Lead Identification and Targeting | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case falls into a presumptive high-impact category due to its role in supporting law enforcement activities, specifically in identifying individuals who pose risks to community safety or violate U.S. immigration laws. However, it does not meet the definition of high-impact AI as outlined in OMB Memorandum M-25-21 because its outputs do not serve as a "principal basis" for decisions or actions with legal, material, binding, or significant effects on an individual or entity's civil rights, civil liberties, or privacy; or human health and safety. The AI modules, including risk extraction, image analysis, language detection, and AI chat, are designed to augment traditional investigative processes by providing structured annotations and insights for analysts to review. These outputs are explicitly described as supporting tools that require human validation and integration with other government data holdings before any enforcement action is taken. Furthermore, the AI system operates as a supplementary tool, consolidating and organizing information to enhance efficiency, but does not independently produce outcomes that directly affect civil rights, civil liberties, or privacy. The safeguards in place, including human validation and adherence to established legal standards, ensure that the AI outputs remain supportive rather than determinative, confirming that this use case does not meet the high-impact definition. | Natural Language Processing (NLP) | The AI is intended to solve the problem of traditional manual open‑source searches missing relevant identifiers or connections in large volumes of online information. | The platform’s AI capabilities reduce the time and effort required to sift through large datasets, improve the ability to uncover relevant information, and enhance the overall efficiency and effectiveness of ICE enforcement operations. | The platform utilizes AI modules to assist ICE Enforcement and Removal Operations (ERO) in open-source research and investigations. The risk extraction capability uses AI to identify and classify potential risks within documents, such as references to criminal activity or connections to organizations of concern, and generates structured annotations for analysts to review. The platform also includes AI-powered translation, which allows analysts to work with multilingual content, and image analysis, which detects and extracts objects from images linked to documents to provide additional investigative context. Additionally, the system can analyze language within documents to highlight text that may indicate threats or planned violence, drawing attention to sections that require closer examination. An AI chat interface further supports analysts by enabling real-time, conversational queries and responses, making it easier to surface insights and context from large datasets. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2667 | Global Maritime Intelligence | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case does not meet the definition of "high-impact" as outlined in Section 5 of M-25-21 because its outputs do not serve as a "principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety." Instead, the AI system generates intelligence reports and risk assessments that are used to support human decision-making. Analysts review the outputs and initiate follow-on actions after validating the source data, such as inspections or investigations, ensuring that the AI's outputs are not the sole or principal basis for these decisions. While the use case aligns with a presumed high-impact category due to its critical role in maritime safety and law enforcement, it does not meet the stricter definition of high-impact because its outputs are advisory and produce leads that must be validated as part of the investigative process prior to action being taken. | Classical/Predictive Machine Learning | The AI is intended to solve the problem of investigators having to manually piece together fragmented maritime activity data from many sources, which makes it difficult to see relationships among vessels, shipments, and ports and identify potential leads on illicit maritime activity. | The use of AI in this process helps Homeland Security Investigations quickly identify potential threats, improves the efficiency of intelligence operations, and enables faster responses to maritime risks that would be difficult to detect through manual analysis alone. | The platform uses several machine learning (ML) models and other AI techniques to process and analyze large volumes of maritime data from multiple sources, such as satellite imagery, Automatic Identification System (AIS) signals, and transactional maritime data. The platform’s AI models detect patterns and anomalies that may indicate potential threats or behaviors consistent with illicit activities like smuggling or trafficking. These AI-generated insights are incorporated into detailed intelligence reports and risk assessments for platform users. These outputs support HSI analysts’ decision-making and are reviewed in conjunction with other HSI data holdings to determine whether analysts should take follow-up actions, such as investigations, into flagged entities. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-48 | Email Analytics for Investigative Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides more efficient data processing for HSI personnel. The AI output may be used to produce investigative insights in the form of data, information leads, or connections that HSI personnel can use to inform investigations, but the output itself is data preparation and organization so HSI personnel can produce those leads when combining the AI output with the personnel’s expertise and other relevant investigative data and information. Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Natural Language Processing (NLP) | This use case intends to solve the problem of the time-consuming and resource-intensive process of preparing multilingual email data for analysis. | Homeland Security Investigations (HSI) personnel encounter large volumes of legally acquired, multilingual email data that must be prepared (ingested, triaged, translated, searched, and filtered) before it can be analyzed to support investigations. The email analytics workflow eliminates manual data preparation processes and leverages machine learning to conduct spam message classification, translation, and entity extraction, including names, organizations, or locations. It also utilizes HSI's AI-enabled translation capabilities (see related use case “Translation and Transcription for Investigative Data”) for translation of emails in other languages to English. The output reduces time and resources spent preparing data, increases the analytic utility of the data, and allows HSI personnel to more quickly conduct analysis on the information. | The output is email data that has undergone spam message classification, translation, and entity extraction, including names, organizations, or locations. | ||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-417 | Machine Learning Analysis Applied to Cyber Threat Hunt Data | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. This use case detects anomalies or outliers and may also be used to classify or categorize certain activity within data that has already been collected during an authorized cyber threat hunt operation within TSA’s networks. The AI outputs are reviewed by a human analyst team to determine if any of the patterns might be associated with unusual activity. The analyst would then continue to investigate further as during normal threat hunt operations. | Classical/Predictive Machine Learning | The use case addresses the problem of how to maximally assess available data to identify anomalies and other patterns that may inform the cyber threat hunt process. | Cyber threat hunts typically involve a vast amount of data. Machine learning models can quickly and efficiently process this data and identify anomalous activity more effectively than humans. This could improve the efficiency and quality of cyber threat hunts by detecting suspicious behavior more quickly and increasing the amount of data that can be analyzed during a hunt. | Currently, the output is a list of potential anomalies or outliers within the system, but development is still ongoing. | ||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2627 | Extended Automated Name Harvesting (eANH) | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The use case increases the efficiency of tasks associated with the accurate and timely identification, analysis, and review of biographical information needed for adjudication. The AI outputs are suggested aliases and DOBs related to the individual query, which USCIS staff must review to accept, reject, or ignore. The AI outputs reduce the amount of adjudicative time spent manually harvesting aliases and DOBs. The use case increases the efficiency of tasks associated with reviewing existing records for adjudicating requests for immigration benefits. Completing such adjudications is not dependent on the use case; however, lack of this tool would significantly increase human processing times and potentially reduce the accuracy of information consulted during the human review process. | Natural Language Processing (NLP) | OIT is developing a solution that systematically extracts text from evidence documents and identifies aliases and DOBs from the extracted text. | Since users no longer need to read through the entire set of case evidence, which is often hundreds of pages, this should decrease case processing time while retaining the same or better performance. | Extracted names and DOBs | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2362 | AI to generate testable synthetic data | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | | Generative AI | Improve integration with trade partners and reduce implementation time of new capabilities. | The purpose of using AI to generate test data for trade partners is to create more realistic data for trade partners to use for testing their systems before releasing new ACE capabilities. Currently, there are many data issues where test data does not accurately reflect real production data, resulting in unrealistic failures during testing and wasted time and resources tracking down false-positive errors. By providing trade partners with more realistic test data, the expectation is that testing times will be shorter and that enhancements and capabilities can be delivered more quickly. | The AI capability would generate test data without PII or other trade-sensitive data and allow for more accurate simulation of production data. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2375 | Thermal Power Generation with Geoseismic IoI Detection and Classification | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | | Classical/Predictive Machine Learning | Overcomes the following problems: 1) the cost of replacement batteries, 2) the time it takes agents to constantly replace batteries, and 3) the problem of revealing the UGS location when replacing the batteries. | Utilize seismic sensor data to determine Items of Interest in deployed locations. Increases situational awareness in austere environments and reduces the need for battery replacement due to self-charging. | Alert with the classification and confidence interval for the Item of Interest (IoI). | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2444 | API Security Vulnerability Technology | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | | Classical/Predictive Machine Learning | Discover, ingest, and analyze APIs to create and run thousands of custom attack scenarios against every build prior to production. Catch security vulnerabilities as early in the software development life cycle (SDLC) as possible. Educate and empower security and dev teams on sound API security strategies. | With the necessity of leveraging application programming interfaces (APIs) for applications across the enterprise, this AI technology is intended to run thousands of custom attack scenarios against APIs on a continuous basis. This will help to identify potential security vulnerabilities prior to production-level deployments and enable the enterprise to develop essential remediations, if required. Additionally, the technology is intended to provide continuous monitoring of APIs and real-time alerts on new potential vulnerabilities. | The AI system is intended to output custom reports on identified vulnerabilities within application programming interfaces (APIs). These reports will include the identified risk and a summary of the risk, as well as tailored remediation guidance based on the applications, environment, data, and tests conducted for end-users. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2446 | Cyber Threat Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | | Classical/Predictive Machine Learning | Cyber deception is a proactive cybersecurity strategy that involves creating a network of deceptive elements, such as decoys, to mislead and divert potential attackers. By strategically deploying these deceptive artifacts across an organization’s network, cyber deception aims to confuse attackers and delay their progress. | Cyber deception is used alongside other cybersecurity measures to enhance overall security posture. Cyber deception not only enhances threat detection but also provides valuable insights into attacker behavior, aiding in the development of more effective defense strategies and minimizing the risk of successful cyberattacks. Cyber deception technology plays a crucial role in enhancing cybersecurity defenses by enabling organizations to detect threats faster and decrease attacker dwell time. | When attackers engage with the deceptions, they reveal their presence and tactics, allowing security teams to detect, analyze, and respond to threats in real time. This proactive approach not only reduces the time attackers spend within the network but also provides valuable insights into their tactics, techniques, and procedures (TTPs). | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2447 | Forced Labor Virtual Consultant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | | Generative AI | Customs and Border Protection (CBP) enforces U.S. laws against importing goods made with forced labor by analyzing extensive data and addressing complex global supply chains. CBP aims to leverage advanced tools to protect U.S. economic security and uphold its leadership in combating forced labor. | The system aims to support Customs and Border Protection (CBP) analysts by integrating internal forced labor databases with preloaded trend analysis data, enabling rapid risk identification and report generation. It complements CBP’s data science team by providing faster access to critical information, enhancing enforcement efficiency. | Plain-language reports, analysis, and direct LLM responses to user queries. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2449 | Multi-media Insight Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | | Computer Vision | The AI system solves the challenge of analyzing large volumes of surveillance footage and audio data efficiently. It enables Customs and Border Protection (CBP) to conduct real-time searches for objects, sounds, language, and events, detect anomalies, and track items of interest across multiple video streams concurrently. By providing outputs like real-time alerts and visual annotations, the system enhances situational awareness, streamlines investigations, and supports faster decision-making to improve border security and operational efficiency. | The system aims to enhance CBP's ability to monitor and analyze surveillance footage from existing CBP camera technology, enabling real-time detection of anomalies and tracking of items of interest within the video frame. It will improve situational awareness, streamline the object and event identification process, and support faster, more accurate decision-making, ultimately enhancing security and operational efficiency. Essentially, this technology will allow users to review significant amounts of historical imagery to identify objects, scenes, and activities of interest and reduce the manual burden of searching multiple video streams. | Outputs include real-time alerts on activities that should be watched and visual annotations to identify items of interest. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2452 | Source Code Development Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The tool uses generative artificial intelligence (GenAI) powered by large language models (LLMs) and coding foundation models to assist with software development. The AI serves as a coding assistant, allowing users to create, refine, and complete software projects through natural language commands and queries. It generates functional software code, streamlining the development process, reducing manual effort, and improving efficiency. | The tool is designed to enhance software development efficiency by enabling end users to create, refine, and complete projects more quickly through the use of a generative artificial intelligence (AI) coding assistant. By automating portions of the coding process, the tool reduces development time, optimizes workflows, and improves overall productivity. | The tool generates functional software code for end users, enabling them to efficiently develop, refine, and complete projects with minimal manual effort, delivering high-quality code outputs tailored to user prompts and project requirements. | The tool generates functional software code for end users, enabling them to efficiently develop, refine, and complete projects with minimal manual effort, delivering high-quality code outputs tailored to user prompts and project requirements. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2540 | Open Metadata | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Metadata catalog identification. | The AI uses a large language model to generate proposed descriptions for field names in our data catalog. The tool itself does not come with Generative AI, but we have built the capability to leverage generative AI for description generation and quality test case generation. Roughly 80% of our data assets in databases do not have descriptions. The use of AI to generate these descriptions would allow us to provide descriptions for our vast data assets. The quality test cases ensure that the data in our systems are accurate, correct, and consistent. These AI tools can be turned on or off. | Proposed descriptions for database field names and generated quality test cases. | Proposed descriptions for database field names and generated quality test cases. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2551 | Automated Incident Creation (IT) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to reduce the time and effort required of a business analyst to manually review contact messages submitted to the Business Connection (BC) and Technical Reference Model (TRM) teams. By evaluating these messages, the AI will determine whether a ServiceNow helpdesk ticket is needed to resolve the submitted question or concern. This automation will streamline the process and ensure timely resolution of user issues. | AI will be used to drive automation via integration with ServiceNow, automatically triaging requests to determine if a ServiceNow helpdesk ticket should be created. The determination and justification will populate a field in the table that stores user contact messages, enabling validation by a business analyst before finalizing ticket creation. | Responds with a Yes/No determination and a justification of the determination, which is stored in the contact message table for validation by a business analyst. | Responds with a Yes/No determination and a justification of the determination, which is stored in the contact message table for validation by a business analyst. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2566 | TRM Classification Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The system uses Gemini 1.5 to perform multi-label classifications, including product categorization, subclassification, and reasoning for the classification outputs, replacing an existing platform to reduce costs and improve efficiency. | The system uses AI to categorize vendor product capabilities, reducing costs and improving efficiency by replacing a previous system. The classifications are integrated into the Technical Reference Model (TRM). | Multi-label output of vendor product classifications and subclassifications, with reasoning for each classification output. | Multi-label output of vendor product classifications and subclassifications, with reasoning for each classification output. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2567 | FOIA processing automated redaction (RedactAI) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case leverages AI technology to streamline the identification and redaction of sensitive content in documents related to FOIA requests. Using Vertex AI and other GCP services, the system identifies and categorizes content that may require redaction based on specific controls. This automation is expected to significantly reduce processing times compared to manual redaction, enhancing efficiency in handling FOIA requests. The system provides recommendations for content to be redacted. Users are required to review and manually approve the suggested redactions based on the appropriate FOIA exemptions. | AI is being used to find content that may need to be redacted in documents related to FOIA requests and applicable FOIA exemptions. The benefits are expected to be much faster processing time for the requests as opposed to manually performing the task. | Output is recommendation of content to be redacted. Users are required to review and manually approve the suggested redactions based on the appropriate FOIA exemptions. | Output is recommendation of content to be redacted. Users are required to review and manually approve the suggested redactions based on the appropriate FOIA exemptions. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2569 | Speech Assist Virtual Interview Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Accurate audio processing automation and draft transcription generation to reduce the labor-intensive manual work associated with transcribing audio interviews and reports, while ensuring Officer-vetted accuracy and nuance. | To enhance operational efficiency by accelerating the processing and analysis of audio data, resulting in both reduced labor burden and compressed workflow timelines. Speech Assist will offer diarized transcriptions of conversations alongside conversation summaries and named entities extracted from audio captured for various CBP mission use cases. The output will be used to automatically draft a document for review by CBP officers and agents. Speech Assist will reduce the time required for interpretation and comprehension of key information from audio data, as well as the time required for transcribing audio reports. | The AI system will provide original audio clips alongside feature-rich output reports that include diarized transcriptions, transcript summaries, speaker summaries, and named entities extracted from the audio clips. The diarized transcript will consist of audio segments from the audio clip containing the start and end time of the audio segment, a unique label for the speaker (e.g., Speaker A, Speaker B, etc.), and their verbatim speech in the original language. The output is provided as a highly structured nested JSON containing free text that is then used to automatically generate a draft transcription for review. | The AI system will provide original audio clips alongside feature-rich output reports that include diarized transcriptions, transcript summaries, speaker summaries, and named entities extracted from the audio clips. The diarized transcript will consist of audio segments from the audio clip containing the start and end time of the audio segment, a unique label for the speaker (e.g., Speaker A, Speaker B, etc.), and their verbatim speech in the original language. The output is provided as a highly structured nested JSON containing free text that is then used to automatically generate a draft transcription for review. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2571 | Business Automation and Improved Search Ability with AI | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The Business Connection (BC) and Technical Reference Model (TRM) teams have identified areas in which business automation can both accelerate determinations by business analysts and improve end-users’ ability to find the information they need more quickly. | AI will be used to drive automation via integration with ServiceNow, automatically classifying products, suggesting alternatives, validating vendors’ headquarters and operations, summarizing meeting notes, and improving search capabilities. | The AI will output a potential product classification and sub-classification that would best fit the product within the TRM and BC. The vendor vetting feature will output the headquarters of a vendor and a summary of its operations, along with a summarization of meeting notes. Searching will output a list of relevant products from a database in ServiceNow. | The AI will output a potential product classification and sub-classification that would best fit the product within the TRM and BC. The vendor vetting feature will output the headquarters of a vendor and a summary of its operations, along with a summarization of meeting notes. Searching will output a list of relevant products from a database in ServiceNow. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2586 | CodeGen | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Workforce enablement of AI developers. | The primary purpose is to provide code generation support for CBP developers, with the anticipated benefit of significantly increasing their efficiency and productivity. By automating repetitive coding tasks and suggesting optimal solutions, this initiative aims to free up developers to focus on more complex and strategic projects, ultimately accelerating the delivery of critical work. | The system's output consists of generated code snippets and/or textual explanations and suggestions related to code development. This output is designed to assist developers in writing, understanding, and debugging code more effectively. | The system's output consists of generated code snippets and/or textual explanations and suggestions related to code development. This output is designed to assist developers in writing, understanding, and debugging code more effectively. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2587 | CounselAI | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | CounselAI will minimize the manual replication of work product, strengthen CBP’s ability to provide consistent responses across the agency’s sizable litigation portfolio, and enable OCC to better defend against legal challenges. | CounselAI will allow OCC to be more efficient and effective. The LLM uses existing work product to help identify similar legal challenges and generate successful draft language for use in litigation. The LLM will also save users time with its search and summarization features. | CounselAI will answer user questions and generate content. The AI output is in chat format, answering user questions about the “knowledgebase” of data. The AI output will also include LLM generated content, such as draft responses and documents to be used in litigation. | CounselAI will answer user questions and generate content. The AI output is in chat format, answering user questions about the “knowledgebase” of data. The AI output will also include LLM generated content, such as draft responses and documents to be used in litigation. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2663 | Border Infrastructure Center of Excellence AI (BICE AI) | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The BICE AI system automates the creation of precise legal descriptions, parcel reports, and boundary maps from geospatial data, reducing manual effort, errors, and time. By leveraging generative AI, it ensures accuracy, compliance, and efficiency, enabling real estate professionals to focus on higher-value tasks and strategic decision-making. | The intended purpose of the BICE AI is to automate the generation of precise legal descriptions from geospatial data. It's designed to act as an expert assistant for real estate professionals. The expected benefits are increased efficiency, accuracy, and compliance in creating legal documentation. By automating this process, the system frees up specialists to focus on higher-value tasks, reducing the reliance on manual drafting and review by subject matter experts. | The BICE AI system simplifies documentation and planning tasks for real estate professionals by generating several valuable outputs. Its primary feature is the creation of professional-grade legal descriptions, which convert geospatial data—such as coordinates and bearings—into detailed, accurate text in a natural-language format suitable for legal use. This ensures precision and compliance in property documentation. The system also produces parcel intelligence reports, summarizing ownership details, parcel IDs, and property boundaries by analyzing user-defined areas alongside existing parcel data. These reports help professionals quickly understand property ownership and identify adjacent or impacted parcels. In addition, BICE AI generates metes and bounds maps, offering visual representations of property boundaries that include bearing and distance information. These maps can be exported as image files (e.g., PDF or PNG) for easy inclusion in deeds, permits, and other legal documents. The system also provides domain-specific summaries, delivering tailored insights for specific needs, such as parcel acquisition justifications or property encumbrance reports. These outputs enable real estate professionals to make informed decisions efficiently, enhancing their ability to manage property-related tasks effectively. | The BICE AI system simplifies documentation and planning tasks for real estate professionals by generating several valuable outputs. Its primary feature is the creation of professional-grade legal descriptions, which convert geospatial data—such as coordinates and bearings—into detailed, accurate text in a natural-language format suitable for legal use. This ensures precision and compliance in property documentation. The system also produces parcel intelligence reports, summarizing ownership details, parcel IDs, and property boundaries by analyzing user-defined areas alongside existing parcel data. These reports help professionals quickly understand property ownership and identify adjacent or impacted parcels. In addition, BICE AI generates metes and bounds maps, offering visual representations of property boundaries that include bearing and distance information. These maps can be exported as image files (e.g., PDF or PNG) for easy inclusion in deeds, permits, and other legal documents. The system also provides domain-specific summaries, delivering tailored insights for specific needs, such as parcel acquisition justifications or property encumbrance reports. These outputs enable real estate professionals to make informed decisions efficiently, enhancing their ability to manage property-related tasks effectively. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-311 | Integrated Defense and Security Solutions (IDSS) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This system provides assistance in detecting high-risk parcels in the international commerce space. | The system improves the screening efficiency and accuracy of contraband detection in international express consignment and mail inspection. | The system provides a segmented image, highlighting anomalies for further inspection by CBP personnel. | The system provides a segmented image, highlighting anomalies for further inspection by CBP personnel. | |||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-107 | Malware Reverse Engineering | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI capability uses deep learning to assist CISA analysts with understanding the content of malware samples, automating tasks such as triage and indicator extraction. | This use case delivers improved internal government tools for reverse engineering of malware and speeding the development of cyber threat intelligence that can be shared across the government and with CISA partners. Threat actors can leverage the same malware for long periods of time, so having the ability to improve analysis and generation of shareable cyber threat intelligence forces threat actors to spend more resources generating new malware. Machine learning and other analytical tools are leveraged to guide malware analysts and automate elements of the manual reverse engineering process. Automation of tasks such as triage and indicator extraction allow threat hunting analysts to meet high demand and focus more on adversary response. | A report is generated from malware samples submitted to the analysis pipeline that is then used by human analysts to facilitate the malware triage process. Additional recommendations are displayed via plugins to reverse engineering tools. | A report is generated from malware samples submitted to the analysis pipeline that is then used by human analysts to facilitate the malware triage process. Additional recommendations are displayed via plugins to reverse engineering tools. | |||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-2335 | Draft Tailored Summaries of Media Materials for Different Publication Channels | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI capability accelerates the process of drafting summarized content for CISA’s published products. | The Large Language Models (LLMs) will summarize information from historical publications and pre-production documents intended for external publication. | An interface is provided for pre-publication documents to be uploaded. The system will then automatically generate appropriate messaging using approved templates and tag the documents for review prior to publication. | An interface is provided for pre-publication documents to be uploaded. The system will then automatically generate appropriate messaging using approved templates and tag the documents for review prior to publication. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2709 | Spend Plan Analysis GPT | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Agentic-AI | This tool will augment the wider workforce at FEMA to interrogate spend plan data as stored in PBIS/FEMADex to gain quick insights and rapid responses to data calls or requests for information. | The tool will also allow greater insights for FEMA leadership to understand where FEMA plans to spend its provided budget authority and where it was actually spent. | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently using GPT-4o. The tool will provide responses to user queries based on the Spend Plan and Actual execution data. | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently using GPT-4o. The tool will provide responses to user queries based on the Spend Plan and Actual execution data. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2712 | Administrative & Productivity Support for IRC Resource Library | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Long-term disaster recovery requires analyzing large amounts of information stored in multiple locations. IRC typically reviews past recovery plans and projects, available federal and state funding options, community needs, and other recovery strategies used in the event. High-level reviews are necessary for FEMA Program Areas (IA, PA, etc.) on potential funding gaps and cost-share support. Synthesizing that information from the various storage locations, including other departments, IRC SharePoint, and TRAX, poses some challenges. | This use case is designed to solve these challenges by reducing inconsistency in manual research, improving access to historical knowledge, and helping staff quickly identify relevant recovery resources. The purpose of this tool is to assist in the identification of patterns in current and past disaster events. Earlier identification will expedite situational awareness and the development of recovery needs and strategies. It allows the IRC team to define a recovery approach and deliver funding and resources that match a community’s needs more quickly. With faster decision support for FEMA staff, more consistent analysis across regions, and better use of federal resources to support communities after disasters, FEMA is better able to deliver its mission. | The AI tool will produce a set of organized, easy-to-read listings, such as bibliographies, recovery project summaries, evaluations of strategies from similar disasters, and recommended approaches tailored to currently available funding. These outputs will be used by the IRC teams to inform planning discussions, prevent benefit duplication, and guide technical assistance to states and local governments. | The AI tool will produce a set of organized, easy-to-read listings, such as bibliographies, recovery project summaries, evaluations of strategies from similar disasters, and recommended approaches tailored to currently available funding. These outputs will be used by the IRC teams to inform planning discussions, prevent benefit duplication, and guide technical assistance to states and local governments. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2713 | Individual Assistance Document Translation | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI use case addresses three inter-related problems: it eliminates delays associated with human translation of documents originally in non-English languages; it reduces the cost of translation from approximately $40.00/document to pennies per document; and it ensures the entire document is translated, providing IA visibility into all the contents (instead of a summary deemed by contract staff as representative of the original document), making it possible to make fully informed decisions about survivors’ applications. | AI will be used because it is a technology that can perform direct translation automatically and quickly, regardless of document length, enabling faster and more accurate case processing, resulting in improved services to survivors and reduced cost to taxpayers. | The output is the English version of survivor-submitted documents that are non-English in their original version. Both the original submission and the translated version will be stored in the document repository as part of the survivor’s application for assistance. They are substantiating documents that support assistance determinations. | The output is the English version of survivor-submitted documents that are non-English in their original version. Both the original submission and the translated version will be stored in the document repository as part of the survivor’s application for assistance. They are substantiating documents that support assistance determinations. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2717 | Grants Manager Artificial Intelligence ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The chatbot is intended to solve inefficiencies in FEMA’s grants application review process by streamlining access to policy information, reducing the time and effort required for manual research, and simplifying the interpretation of complex policy inquiries. It addresses the need for accurate, consistent, and accessible information to improve decision-making and enhance the efficiency of the PA Program. | The expected benefits include increased operational efficiency, improved accuracy in policy interpretation, and cost savings for FEMA’s mission. By providing quick, standardized responses, the chatbot supports faster and more equitable processing of grants, ensuring disaster survivors receive timely assistance. The system’s analytics and feedback mechanisms allow for continuous improvement. | The AI system, powered by Azure OpenAI Services, generates outputs such as policy-based responses to user queries, concise summaries of complex policies, and historical chat records for reference. It also provides performance insights through a dashboard, tracking usage patterns and response accuracy, and incorporates user feedback to refine its functionality. | The AI system, powered by Azure OpenAI Services, generates outputs such as policy-based responses to user queries, concise summaries of complex policies, and historical chat records for reference. It also provides performance insights through a dashboard, tracking usage patterns and response accuracy, and incorporates user feedback to refine its functionality. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2718 | RTPD Division Services Desktop Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Our government developers lack the capacity and consistency to produce quality code in a timely manner. Our administrative staff facilitate a multitude of processes that require manual intervention. This requires significant overhead, is prone to errors, and results in simple steps or actions taking longer than necessary as requests get lost in email or other forms of communication. There is also no audit trail or logging outside of a personal email address or limited-access shared mailboxes, making it difficult for others to step in and facilitate activities. | AI Coding Assistants can help identify potential issues with code, help our developers troubleshoot more quickly, and begin complex coding more efficiently. This will also enable a team of developers to build consistency across resources and a repository of reusable code segments to speed the delivery of new features and functions. | In this use case, we can expect higher quality code, reducing errors and defects in working software. Administrative staff will be able to monitor progress rather than facilitating it and focus their attention on higher-value mission support activities. | In this use case, we can expect higher quality code, reducing errors and defects in working software. Administrative staff will be able to monitor progress rather than facilitating it and focus their attention on higher-value mission support activities. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2724 | Program Integrity (RRAD-PI) AI Counter-Fraud Enhancement Measures | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The AI use case aims to further expand and address RRAD-PI’s fraud detection and prevention challenges within disaster recovery programs. Specifically, it seeks to mitigate fraudulent activities, identity theft, and deceptive practices that compromise program integrity, ensuring that resources are allocated efficiently and equitably to eligible individuals and entities. | The expected benefits include: • Enhanced Fraud Detection: Improved identification of fraudulent activities, reducing financial losses and ensuring program integrity. • Operational Efficiency: Automation of manual processes, such as document review and identity verification, leading to faster application processing and reduced administrative burden. • Proactive Fraud Prevention: Early detection of fraud risks enables timely intervention, minimizing harm and protecting public funds. • Improved Resource Allocation: Ensures disaster recovery resources are distributed to legitimate recipients, fostering public trust in government programs. • Cross-Agency Collaboration: Facilitates secure data sharing across agencies, enabling a unified approach to combating fraud schemes that span jurisdictions. • Public Confidence: Strengthened program integrity enhances public trust in the agency’s ability to manage disaster recovery efforts effectively. | The AI model/system may generate outputs such as: • Fraud Risk Scores: Quantitative assessments of fraud likelihood for transactions, applications, or entities. • Anomaly Alerts: Notifications of unusual patterns or behaviors indicative of potential fraud. • Network Maps: Visual representations of relationships between entities, highlighting connections to fraudulent actors. • Document Analysis Reports: Summaries of inconsistencies, deceptive language, or forgery detected in submitted documents. • Real-Time Monitoring Flags: Alerts for suspicious activities requiring immediate intervention. • Behavioral Biometrics Insights: Reports on user behavior anomalies, such as unusual typing patterns or device usage. • Image/Video Verification Results: Validation of authenticity for submitted visual evidence. • Threat Intelligence Updates: Integration of external threat data into fraud detection models. • Geospatial Analysis Findings: Location-based discrepancies in claims, such as mismatched disaster relief applications. • Cross-Agency Fraud Insights: Aggregated data analysis highlighting fraud schemes across jurisdictions. | The AI model/system may generate outputs such as: • Fraud Risk Scores: Quantitative assessments of fraud likelihood for transactions, applications, or entities. • Anomaly Alerts: Notifications of unusual patterns or behaviors indicative of potential fraud. • Network Maps: Visual representations of relationships between entities, highlighting connections to fraudulent actors. • Document Analysis Reports: Summaries of inconsistencies, deceptive language, or forgery detected in submitted documents. • Real-Time Monitoring Flags: Alerts for suspicious activities requiring immediate intervention. • Behavioral Biometrics Insights: Reports on user behavior anomalies, such as unusual typing patterns or device usage. • Image/Video Verification Results: Validation of authenticity for submitted visual evidence. • Threat Intelligence Updates: Integration of external threat data into fraud detection models. • Geospatial Analysis Findings: Location-based discrepancies in claims, such as mismatched disaster relief applications. • Cross-Agency Fraud Insights: Aggregated data analysis highlighting fraud schemes across jurisdictions. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2726 | Semantic Search, Summarization, and Data/Spatial Visualization for NCR Watch COP/Dashboard | a) Pre-deployment – The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The speed of information, particularly in a developing incident, challenges the capacity of Analysts to collect, sort, source, and validate in real time. The volume of information required to confirm with confidence requires time to review. The time spent on these tasks leaves less time for moving from basic situational awareness to providing the contextual value that gives national capital region (NCR) stakeholders situational understanding, which allows them to fully utilize the information for operational decision-making. The AI would provide a tool for collecting and synthesizing the available data, as well as geospatial context. | Expedited continuous scraping of data from identified official and unofficial open sources and comparison against pre-defined critical information requirements with summarization and sourcing of information for review and approval by the FEMA NCR Watch Analyst for publication improves the efficiency and effectiveness of the NCR Watch by allowing Analysts to focus on consuming and validating information and adding value with context. The self-designated federal, state, local, and tribal (FSLT) partners in the NCR who receive NCR Watch products would benefit from greater fidelity in provided information and the ability to use the information more easily for operational decision-making. Automation is anticipated to increase the breadth of sources and speed of review, allowing Analysts to create contextualization and analysis products based on the improved data production, as well as geospatial automation in combination with incident data.
Faster Decision-Making: processes vast amounts of data instantly, enabling rapid identification of critical information for timely responses. Improved Accuracy: reduces human error by filtering irrelevant data and prioritizing actionable insights. Enhanced Situational Awareness: improves the breadth of information analysis and overall awareness of activities in the NCR. Resource Optimization: helps allocate resources efficiently based on real-time needs. Predictive Insights: forecasts potential developments, aiding proactive measures. Reduced Information Overload: streamlines data, ensuring decision-makers focus on key priorities. | The anticipated output is a Common Operating Picture platform or Dashboard available to FSLT subscribers (free) in the NCR to display incident reporting and associated contextual information, such as the physical location on an ArcGIS map. Simultaneously, a non-public COP will contain additional contextual information for providing internal FEMA reporting. AI will produce continuous data queries and create alerts for Analysts for items meeting configurable critical information requirements and essential elements of information. AI will deliver a configurable summary of sourced material on pre-defined CIRs to the Analyst for review and publication to the internal and/or external COP/Dashboard. AI will have tools for additional contextual analysis and spatial and data visualization for use in the internal and/or external COP/Dashboard. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-206 | Title III Semantic Search and Summarization for Translated Content | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of efficiently searching through and analyzing large volumes of translated evidentiary data, which can be time-consuming and labor-intensive. | The Title III Semantic Search and Summarization functionality will augment translation and transcription services by extracting relevant data using machine learning and natural language processing for correlation and semantic search. Results can then be summarized using a large language model, giving users a tool to target relevant data only. This capability accelerates investigative analysis by rapidly identifying persons of interest, surfacing trends, and detecting networks or fraud, saving hundreds of man hours in manual analysis. HSI will use this tool to generate leads, and further action will be required by investigators and analysts as part of the full investigative process before any action is taken against an individual. | Outputs include semantic search results, concise data summaries, extracted key entities, identified trends and patterns, and generated investigative leads from large volumes of translated and transcribed evidentiary data. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-208 | Policy Analyst Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of the significant manual effort it takes SEVP staff to research and respond to regulatory and policy questions related to foreign students in the F and M classes of admission, as well as schools that are either certified or seeking certification to enroll those students. | SEVP is developing an AI-enabled solution to help policy analysts and other SEVP staff quickly find and summarize information about regulations and guidance for foreign students and SEVP-certified schools. This enhanced capability reduces the time required for manual research, enabling SEVP staff to focus on more complex policy and guidance issues. It also ensures consistent and accurate responses across SEVP functions, improving overall efficiency and effectiveness in supporting foreign students and schools. | Generated outputs provide an initial analysis of applicable material, which analysts refine, modify, and review as part of their process. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2402 | SEVP Response Center Chatbot - SID (SEVIS Interactive Dialog) | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This use case intends to solve the problem of handling a high volume of routine inquiries from students and officials, which can overwhelm human agents and delay response times. | AI provides SID the ability to understand voices and its deterministic question and answer workflow (1) enables SID to answer routine caller questions without a help desk agent, and (2) when a help desk agent is required, SID will create a ticket with a caller transcript to reduce the burden on the agent. This frees up the human agents to deal with more complex cases and issues with specific records. | SID answers frequently asked questions from callers. If the SID cannot answer a caller’s question, it turns the caller over to an agent in the response center. The chatbot captures the interaction with the caller and sends the information via an API to Student and Exchange Visitor Program Automated Management System (SEVPAMS). | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2423 | Digital Records Manager (DRM) User Assistance Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI tool, the Digital Records Manager (DRM) User Assistance Chatbot, is designed to address the problem of efficiently searching and gathering information within the Investigative Case Management (ICM) system used by Homeland Security Investigations (HSI). Specifically, it provides immediate, on-demand assistance to investigators by allowing them to pose natural language questions about using the DRM application for case and media management. | This is an AI records search tool to help investigators more efficiently search and gather information. The Digital Records Manager (DRM) User Assistance Chatbot is intended to increase user efficiency by providing answers to commonly asked questions without the need to manually refer to documentation or submit a help desk ticket. A reduction in the volume of submitted help desk tickets is expected as a result. | The outputs of the DRM User Assistance Chatbot will be natural language responses to user questions, based on a custom Knowledge Base of DRM documentation artifacts, supported by the natural language capabilities of the backing LLM. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2436 | Burlington Finance Center Voice Bot | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This use case intends to solve the problem of the manual and time-consuming process of handling routine bond inquiries. | The bot will identify and verify the caller, retrieve the status of the bond, and share the bond status with the requester and/or answer administrative FAQs. | It will use Natural Language Understanding (NLU)/Natural Language Processing (NLP) to perform voice-to-text and text-to-voice translations, giving it the ability to recognize voices and meaning. The BFC Voice Bot will be deterministic and will not use Generative AI. Language Translation Technology (LTT) will be used to translate inquiries from Spanish to English and responses from English to Spanish, if needed. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2467 | Purchase Card Worksheet Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of reviewers having to manually check each purchase card line item against numerous laws, policies, and contract terms, which is time‑consuming and increases the risk of missed compliance issues. | The AI module is expected to help accelerate compliance checks, streamline processes, and assist in identifying potential risks, while continuously improving through a human-in-the-loop feedback system. This integration is intended to help procurement actions better align with relevant legal and policy requirements. | The outputs of the AI module will include a compliance status for each purchase line item (compliant or non-compliant) and detailed reasoning with citations to the specific policies or legal documents used in the evaluation. The Automated Purchase Card Approval System will use these AI-generated compliance statuses and explanations to help route items and flag potential issues within the approval workflow. AI outputs do not themselves approve or deny purchases. Compliant items may proceed through the workflow consistent with existing business rules, while non-compliant items will be flagged for human review. Human reviewers can access detailed reasoning for non-compliant determinations and may choose to correct requests, raise exceptions, or flag inaccuracies. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2513 | Natural Language Search for Legal Case Management | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of Office of the Principal Legal Advisor (OPLA) personnel having to craft complex searches and manually review large volumes of case documents in its OPLA Case Management System to find relevant information. | The AI-enabled search capabilities will enable Office of the Principal Legal Advisor (OPLA) users to more efficiently search and extract relevant information from the OPLA Case Management System. This enhancement improves efficiency, saves time, and enables OPLA personnel to focus on higher-value tasks, ultimately supporting more effective case management. | The outputs of the AI system include generated queries and the corresponding search results. Office of the Principal Legal Advisor personnel use these outputs to review documents and records relevant to their work. The AI does not make legal judgments or case decisions; it helps users find and organize relevant documents, and attorneys remain responsible for interpreting the information and applying the law. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2559 | Duplicate Contract Detection in Contract Tracking Application | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This use case intends to solve the problem of inefficiencies and redundancies in contract management. | The use of AI in contract management reduces the time it takes to manually inspect contracts and improves COT's ability to identify duplicative contracts. This implementation will reduce contract costs and align with federal procurement consolidation and cost-efficiency initiatives. | The primary outputs of the AI system are alerts and reports. When the AI detects a potential duplicate contract, it generates an alert for the user. Additionally, the AI can produce detailed reports that highlight the identified duplicates and provide relevant contract details. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2581 | Semantic Search for Digital Forensics Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem that traditional keyword searches and manual review make it difficult to find relevant evidence across large, mixed-format digital datasets, especially when important material does not contain the exact search terms. | This tool will help Homeland Security Investigations quickly find relevant evidence within large, complex digital datasets, reducing time spent on manual review. This supports faster, more effective investigations and better use of limited investigative resources, ultimately enhancing public safety. | Depending on the type of data ingested, the AI will output a list of items, including chat messages, emails, pictures, and videos relevant to the search query. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2583 | Draft Report Generation and Formatting for Investigations | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of contract managers having to manually review large numbers of contract records to spot possible duplicates, which makes it easy to miss overlapping agreements and contributes to redundant spending. | The use of AI in contract management is expected to reduce the time required for manual contract review and improve the ability to identify duplicative contracts. This implementation supports efforts to reduce contract costs and aligns with federal procurement consolidation and cost-efficiency initiatives. | The solution’s AI outputs are alerts and reports. When the AI detects a potential duplicate contract, it generates an alert for the user. The system can also produce detailed reports that highlight identified duplicates and provide relevant contract details. These outputs are intended to support contract managers and other personnel in reviewing and resolving potential duplicate contracts. Users may access detailed reports to understand the nature of the duplicates and take appropriate actions, such as consolidating contracts, renegotiating terms, or canceling redundant agreements. The resolution workflow guides users through the process of addressing duplicate contract alerts. The AI component will not itself modify, consolidate, or cancel contracts; all actions are taken by personnel following existing approval and procurement processes. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2584 | Named Entity Resolution for Investigative Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of investigators having to manually identify and normalize entities such as names, locations, and other selectors in large volumes of investigative text data, which is slow and makes it difficult to run accurate, consistent searches. | The benefits of using this AI include increased search accuracy, enhanced data analysis capabilities, and the ability to handle domain-specific entities more effectively. | The outputs of this AI solution are the identified and extracted entities, which investigators use to refine search results and improve investigative workflows. Homeland Security Investigations may use the solution to generate leads, with further action required by investigators and analysts as part of the full investigative process before any action is taken against an individual. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2594 | ICE Enterprise AI Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case aims to (1) address cybersecurity concerns related to ICE personnel using externally hosted commercial chatbots and (2) solve the issue of out-of-the-box LLMs lacking tailored capabilities for ICE, such as the integration of internal ICE artifacts. | By providing an ICE-owned and agency-wide solution, the ICE Enterprise AI Assistant addresses cybersecurity and privacy concerns associated with external AI tools, while enhancing employee efficiency and supporting the ICE mission. The tool advances ICE’s AI adoption and is expected to improve information access and increase productivity across the organization. | The solution’s AI outputs vary depending on the user’s request, but generally help personnel quickly access relevant data, reducing the time spent searching through internal resources. The chatbot also improves data reliability by citing its sources and tailoring responses to the specific needs of ICE personnel. Outputs are intended as a support tool, and users may not rely on outputs as the principal basis for decisions or actions classified as high-impact AI under OMB guidelines. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2603 | ICE Terminology and Data Asset Discovery Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of the time-consuming process of manually searching internal ICE resources for dataset information to create logical data definitions. | The chatbot is expected to help the Data Management Team quickly locate relevant SharePoint information and assist in reviewing and refining data element information. By reducing the time spent searching for information and creating logical data definitions, the chatbot enables personnel to focus on higher-value tasks. | The AI produces text responses to user queries and generates candidate logical table and column names, which Data Management Team personnel review and validate against source information and may incorporate into ICE’s internal data catalog. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2755 | AI-Assisted eDiscovery Search | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The use case addresses inefficiencies in traditional document search and retrieval processes, such as time-consuming manual searches, the complexity of crafting SQL queries, low relevance and accuracy in search results, difficulties in digesting lengthy content, and the absence of summarization tools. These challenges hinder users' ability to efficiently access and understand critical information. | The intended purpose of this AI use case is to optimize document search, retrieval, and summarization processes by enabling users to interact with data conversationally and efficiently. The benefits include improved accuracy and relevance in search results, reduced time spent analyzing large datasets, simplified access to complex information, and enhanced decision-making through concise summaries and precise outputs. | The solution’s AI outputs include precise search results and summarized content, enabling users to quickly access information, make informed decisions, and take follow-on actions without requiring technical expertise. The tool does not make legal or case outcome decisions; it retrieves and summarizes documents, and Office of the Principal Legal Advisor personnel remain responsible for interpreting the results and applying the law. | |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2342 | JES and Appropriations Insight | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Convert plain text statements in Congressional reports to machine-readable tasks that can be managed in Outlook, Jira, or other project tracking software. | Purpose: Converts scanned financial tables from PDFs into structured, machine-readable data while maintaining multi-year spending relationships. Benefits: Eliminates manual data entry, reduces errors, and significantly speeds up the process of consolidating historical financial data from legacy documents. | Structured tables in place of free text to provide a machine-readable dataset. | |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2343 | CFO Navigator | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Quickly and accurately accessing and understanding DHS Chief Financial Officer (CFO) data. | Provides authorized staff with an intuitive, conversational interface to query and analyze DHS CFO financial data and reports. Benefits: Democratizes access to financial information, reduces time spent searching through documents, and enables quick self-service analytics without specialized database knowledge. | On-demand information retrieval via natural language processing. | |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2406 | DHS Asset Assessment Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reduces users’ required time, effort, preexisting knowledge of which dataset has which information, and expertise with writing functional SQL. | One of DHS’s largest expenses is its people, facilities, and assets. The value of the Asset Assessment tool is to improve the efficiency of DHS Property Managers’ analysis and planning process by expediting and automating today’s very manual assessment of facility placement and consolidation. The tool also provides enhanced views of information that were not available previously – such as the prediction of facility cost over time for multiple facilities using actual utilization data inputs. The tool provides developer time savings, and ultimately resource cost savings, by removing manual SQL development steps from the analysis process. The tool ultimately reduces users’ required time, effort, preexisting knowledge of which dataset has which information, and expertise with writing functional SQL. | Natural language processing outputs with descriptions of intermediate AI logic to accomplish cost-benefit asset assessments. When outputs include location-specific information (e.g., which county an office is in or which facilities are in a given state), outputs include mapping of geospatial information to enhance spatial analysis and improve the accuracy of decision-making. | |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-418 | Spending Analysis and Budget Execution Risk (SABER) Model | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify accounts at risk of over- and under-spend. | Purpose: Predicts potential budget execution issues by analyzing historical spending patterns across various Treasury accounts and classifications. Benefits: Early identification of spending anomalies allows proactive budget management and reduces the risk of under/overspending. | Warnings, flags for review and comparison via prediction and classification. | |||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2418 | MiX Phenotyping | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Syndromic surveillance | Electronic medical records can provide key data for health security insights. AI/ML makes it possible to translate these records into machine-readable data, which is the first step to finding these health security insights. AI/ML will then be used to find clinical patterns across the data, such as patterns in symptoms over time and location. AI/ML will also be used to automate the process for detecting new trends (anomalies) in these clinical patterns. These health security insights can alert about potential threats, inform messaging, and provide decision support to medical and public health partners. | The main AI system outputs will include anomaly detection for emerging trends in clinical record patterns. AI/ML outputs will also be used to find these clinical patterns by clustering, classification and topic modeling. These clinical patterns will finally be output as linear reference models that are simple enough to be interpreted and guided by human clinicians using human-in-the-loop collaboration. | The main AI system outputs will include anomaly detection for emerging trends in clinical record patterns. AI/ML outputs will also be used to find these clinical patterns by clustering, classification and topic modeling. These clinical patterns will finally be output as linear reference models that are simple enough to be interpreted and guided by human clinicians using human-in-the-loop collaboration. | |||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2419 | MiX Indicators | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identification of emerging health threats. | Online news and other open-source media can be used to quickly detect and respond to emerging health security threats. However, it is not practical for people to read and scan all online news stories every day for key terms. AI/ML makes it possible to read, digest and organize news stories to look for key threat terms. AI/ML will also be used to predict normal trends in news stories. This makes it possible to detect when an unusual number of health security threat terms are found in the news. This use of AI/ML helps prepare for, respond to, and protect from potential health security threats. | The main AI system output will be anomaly detection which represents two elements: 1) AI/ML predictions for usual trends in news story key terms vs. 2) the actual daily number of mentions for key terms in the news. When the actual number of mentions for key terms exceeds modeled predictions, these will be detected as anomalies. | The main AI system output will be anomaly detection which represents two elements: 1) AI/ML predictions for usual trends in news story key terms vs. 2) the actual daily number of mentions for key terms in the news. When the actual number of mentions for key terms exceeds modeled predictions, these will be detected as anomalies. | |||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2421 | One Health Threat Detection and Risk Assessment Platform (OH-TREADS) / Planner | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Early warning, decision making, and situational awareness | Visuals will be created using a broad variety of data pulled from open-source, public, and non-public sources. AI/ML will make it possible to ingest data from a variety of digital formats and translate it into usable, machine-readable information. The volume of data, and the lack of clear data collection standards, requires AI to help merge streams with different data ontologies while facilitating data interoperability across mis-paired data sets. Neither activity is, or can be, practically performed by people. AI/ML will be further used in Planner to generate risk and predictive scores in addition to displaying relevant information and analyses that help analysts understand health security threats and broadly monitor the health security landscape. AI/ML is meant to provide comprehensive situational awareness for health surveillance and public health response, with target capabilities that: enable global situational awareness of current and potential health risks from a One Health perspective; assist in early warning of health threats by location, facility, and species; aid in rapid identification of health threats at population, facility, and greater geographic resolution; and support data-driven decision making to prevent, mitigate, and respond to health threats. | The main AI system outputs will be anomaly detection following the translation of information into structured, machine-readable datasets. AI/ML is then used for risk identification and disease prediction that are overlaid on a map and displayed with other holistic visuals. The visuals and analytics can then be combined with critical infrastructure locations, resources, or capabilities (federal, state, local, tribal, and territorial) to aid response and decision making. | The main AI system outputs will be anomaly detection following the translation of information into structured, machine-readable datasets. AI/ML is then used for risk identification and disease prediction that are overlaid on a map and displayed with other holistic visuals. The visuals and analytics can then be combined with critical infrastructure locations, resources, or capabilities (federal, state, local, tribal, and territorial) to aid response and decision making. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2397 | OTA Automated Passenger Screening Gate System | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Increase efficiency and security of Automated Passenger Screening | The purpose is to manage passenger flow while reducing human interaction by using AI to assess body positioning and automatically initiate the screening process when the passenger is in the optimum position. Provide positive control of passengers transitioning from the Non-Sterile to the Sterile side of the checkpoint via the QPS201 On-Person Screening (OPS) system, enhancing the security of the checkpoint environment and improving efficiency. | The AI system initiates an AIT scan when the passenger is in the optimum position. Depending on the results of the AIT scan, the system’s interlock control unit ensures that both doors do not open simultaneously, and the settings are configurable for TSA to determine system operation. The Dormakaba V60 doors are composed of glass, reach a maximum height of approximately 3.2 feet, and can be attached to the entrance and exit of a R&S QPS201. The V60 doors open automatically following the completion of a successful scan or remain closed if the scan was unsuccessful; the passenger would then be routed either for additional screening or to the re-composure area to claim their accessible property and transit into the sterile area. | The AI system initiates an AIT scan when the passenger is in the optimum position. Depending on the results of the AIT scan, the system’s interlock control unit ensures that both doors do not open simultaneously, and the settings are configurable for TSA to determine system operation. The Dormakaba V60 doors are composed of glass, reach a maximum height of approximately 3.2 feet, and can be attached to the entrance and exit of a R&S QPS201. The V60 doors open automatically following the completion of a successful scan or remain closed if the scan was unsuccessful; the passenger would then be routed either for additional screening or to the re-composure area to claim their accessible property and transit into the sterile area. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2400 | Answer Engine | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The Transportation Security Administration (TSA) faces increasing challenges in managing and analyzing large volumes of complex data, which can hinder the effectiveness and efficiency of its security operations. Without advanced tools to streamline data processing and generate actionable insights, the TSA’s ability to respond to evolving threats and optimize operational decision-making is limited. There is a critical need for a scalable solution that can enhance data management and analysis capabilities, enabling TSA personnel to make more informed, timely, and effective security decisions. | TSA aims to enhance its capabilities in managing and analyzing complex data, ultimately contributing to more effective and efficient security operations and optimizing the TSA's operational workflows and support capabilities. | This platform is anticipated to harness the power of AI to provide intelligent, context-aware responses and insights. | This platform is anticipated to harness the power of AI to provide intelligent, context-aware responses and insights. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2428 | Contract Requirement Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI solution is intended to solve the problem of labor-intensive manual creation and management of procurement requirement documents at TSA. The solution provides the TSA user with contextually accurate outputs (primarily in the form of automated document generation and recommendations for procurement documentation) tailored to specific requirements. The platform also provides contextual recommendations during the document creation process to ensure completeness and compliance with procurement requirements. | Eliminates the manual burden and reduces errors in document creation, where staff previously spent excessive time writing requirements to meet specific procurement needs from scratch. Increases documentation accuracy, consistency and standardization. Improves management and verification of requirement documents. Cost savings represent another major benefit, as the platform reduces labor costs associated with manual document creation and management. By streamlining the procurement process, the agency can complete more procurement actions with existing resources, maximizing taxpayer dollars. The automated tool also reduces the time spent on repetitive tasks, allowing for better resource allocation. | The tool primarily outputs automated document generation and recommendations for procurement documentation. The platform provides contextual recommendations during the document creation process to ensure completeness and compliance with procurement requirements. The tool provides recommendations and decision support to guide users through the proper documentation structure and content requirements. 
The system's outputs always require human verification as part of the workflow, ensuring that all generated content is reviewed and approved by qualified personnel before being finalized in the procurement process. | The tool primarily outputs automated document generation and recommendations for procurement documentation. The platform provides contextual recommendations during the document creation process to ensure completeness and compliance with procurement requirements. The tool provides recommendations and decision support to guide users through the proper documentation structure and content requirements. The system's outputs always require human verification as part of the workflow, ensuring that all generated content is reviewed and approved by qualified personnel before being finalized in the procurement process. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2429 | TSA Case Handling Platform | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The tool saves case workers manual time to download, order, and compile reports. | The tool saves case workers the manual time needed to download, order, and compile reports. The case workers can then use their time to work on cases rather than administrative tasks. The Newton POC has the potential to streamline collection processes related to cases and to create custom reports from various materials during the case manager’s interview process, producing a centralized tool to manage and control all steps within each case. | The LLM automation capabilities include compiling, ordering, and exporting a PDF document that contains 6 key documents from the initial steps of the formal complaint process, and 3 key documents at the final stages of the complaint. | The LLM automation capabilities include compiling, ordering, and exporting a PDF document that contains 6 key documents from the initial steps of the formal complaint process, and 3 key documents at the final stages of the complaint. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2430 | Automated Field Data Collection | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | The proliferation of screening technologies has increased the number of field data collection events necessary to characterize system performance. Currently, TSA does not have a solution to gather operational data without deploying physical teams. Addressing this gap presents an opportunity to achieve significant field efficiencies through automation and enhance wait-times communication. | The AI will analyze screening environments via CCTV footage and extract passenger processing times for various steps within the screening processes. Enabling AI to extract and visualize this data will allow TSA to make data-informed decisions while testing or deploying new screening equipment, identify anomalies, establish real-world rates and standards, and reduce or eliminate TSA’s need to deploy data collection teams, resulting in real-time data collection and significantly reduced computational time for findings. | The AI system outputs multiple decisions, including screening location performance, rates and standards of the end-to-end screening system, and passenger wait times. | The AI system outputs multiple decisions, including screening location performance, rates and standards of the end-to-end screening system, and passenger wait times. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2431 | Plan of Day Staff Optimization | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Agentic-AI | TSA must deploy a dynamic solution capable of locally optimizing allocated employee staffing while load balancing available equipment and projected passenger throughput to ensure wait times do not exceed the threshold. | Plan of Day will automate TSA screening staff optimization. | Staffing operations models prescribing when screening lanes should be opened/closed, when/where screening staff is required to absorb operational peaks, determining optimal gender and certification ratios, recommending when to schedule overtime/shift adjustments, drafting lane rotation plans, and informing national TSA staffing requirements as prescribed optimization plans deviate as airline schedules shift. | Staffing operations models prescribing when screening lanes should be opened/closed, when/where screening staff is required to absorb operational peaks, determining optimal gender and certification ratios, recommending when to schedule overtime/shift adjustments, drafting lane rotation plans, and informing national TSA staffing requirements as prescribed optimization plans deviate as airline schedules shift. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2526 | Document Translation Service (DTS) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Leveraging accurate cross-language translation with the latest Azure AI cloud service | The DTS application allows users to upload a document in a source language, call a Text Translator API specific to the application, and then download a translated copy of their artifacts. | Translates documents to and from 100 languages and dialects while preserving document structure and data format. (See Section 10.9 TAZ Azure Service Utilization) | Translates documents to and from 100 languages and dialects while preserving document structure and data format. (See Section 10.9 TAZ Azure Service Utilization) | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2622 | Service Now Predictive Intelligence Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This solves the issue of submitting duplicate or very similar Projects, Demands and Ideas. | This tool assists in preventing wasted time in working through Projects, Demands, or Ideas that have already been processed. | The AI outputs suggestions of similar Projects, Demands and Ideas. No decisions are made. | The AI outputs suggestions of similar Projects, Demands and Ideas. No decisions are made. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2676 | Text Extraction from Uploaded Images | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Without this capability, images are taken during maintenance but are not searchable, and the information cannot be used in generating details. | By converting images into text, they become searchable, which helps ensure data quality and maintenance ticketing information quality. | The outputs are text data extracted from images. | The outputs are text data extracted from images. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-374 | TSA Contact Center Virtual Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI use case is designed to enhance customer service accessibility, improve operational efficiency, and provide valuable data insights to support TSA's mission and decision-making processes: 1. Limited Accessibility and Availability: The chatbot addresses the challenge of providing timely and accessible responses to public inquiries, especially outside the TSA Contact Center's operational hours. 2. Increased Demand for Customer Support: By automating responses to routine inquiries, the chatbot mitigates the strain on TSA staff caused by growing demand for customer support. 3. Resource Constraints and Fiscal Responsibility: The chatbot reduces the need for additional human resources to handle routine inquiries, thereby improving operational efficiency and fiscal responsibility. 4. Lack of Data-Driven Insights: By capturing and analyzing customer interaction data, the chatbot enables TSA to gain insights into customer needs, improve public information, and prioritize innovation and transformation efforts. 5. Consistency and Accuracy of Information: The chatbot ensures that responses to public inquiries are consistent, accurate, and aligned with the TSA's knowledge library. | The three goals for TSA's Virtual Assistant Chatbot are: 1. Enhanced Accessibility and Timeliness: By providing immediate responses both within and outside the TSA Contact Center's (TCC) operational hours, the Virtual Assistant improves ease of access for the public seeking answers to common questions. 2. Data-Driven Innovation and Transformation: The chatbot captures customer inquiries, providing valuable data to inform innovation and transformation priorities across the agency. 3. 
Improved Fiscal Responsibility: Leveraging automation to address increasing demand mitigates the need for additional resources, thereby enhancing fiscal responsibility. | As a predictive AI capability, the TSA's Virtual Assistant chatbot functions by correlating existing content within the TCC's knowledge library with the NLP to identify the most relevant knowledge articles for user queries. It does not generate original content. However, the system records transactional data related to customer interactions, including inputs, outputs, and topic classifications. This consistent data capture, mirroring existing email and phone channels, enables TSA to gain critical insights for customer experience improvement efforts, identify areas for public information enhancement, and understand the demand for specific services. | As a predictive AI capability, the TSA's Virtual Assistant chatbot functions by correlating existing content within the TCC's knowledge library with the NLP to identify the most relevant knowledge articles for user queries. It does not generate original content. However, the system records transactional data related to customer interactions, including inputs, outputs, and topic classifications. This consistent data capture, mirroring existing email and phone channels, enables TSA to gain critical insights for customer experience improvement efforts, identify areas for public information enhancement, and understand the demand for specific services. | |||||||||||||||||||||
| Department Of Homeland Security | USCG | DHS-2740 | Risk Management Framework (RMF) Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Provides features that help users translate complex security and privacy controls into plain language, supports system/security monitoring, and generates documents such as FedRAMP packages | RMF Automation will cut ATO processing time by more than half by handling repetitive tasks so compliance teams can focus on strategy and still make final decisions. It will speed up documentation and assessments, provide near real-time risk insights, and help collect and manage security evidence to demonstrate compliance. | The primary output is document generation specific to cybersecurity needs. The system can also translate controls into plain language for users and monitor and flag system breaches. | The primary output is document generation specific to cybersecurity needs. The system can also translate controls into plain language for users and monitor and flag system breaches. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2386 | Sentiment Analysis - FOD Field Offices Complaints and Reviews | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The AI solution helps USCIS Field Offices efficiently analyze and categorize large volumes of complaints by using machine learning to identify sentiment trends (positive, negative, or neutral) in public feedback. | This system indicates the positive or negative feelings people express in their feedback to U.S. Citizenship and Immigration Services (USCIS). Survey results are categorized as positive, negative, or neutral in tone in an Excel dashboard. | A graph that categorizes the data into different sentiments, using a Databricks dashboard, to see how customer service can be improved. It does not give any recommendation or decision. | A graph that categorizes the data into different sentiments, using a Databricks dashboard, to see how customer service can be improved. It does not give any recommendation or decision. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2599 | Private Artificial Intelligence (AI) Tech Hub (PAiTH) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | USCIS staff currently lack access to AI tools that can accelerate routine knowledge work while maintaining the security, compliance, and privacy requirements specific to USCIS operations. Existing commercial AI solutions cannot access USCIS-specific documents and data, cannot be customized to immigration-specific workflows, and pose data security risks. Staff spend significant time on tasks that AI could assist with—such as legal research, document drafting, language translation, code generation, and regulatory compliance checks—but have no approved internal AI capability. Additionally, different organizational roles have vastly different AI assistance needs (e.g., attorneys need legal citations; developers need code; contracting officers need FAR guidance; budget officers need access to sensitive internal fiscal data), requiring a solution that adapts to the user's function rather than providing generic responses. PAiTH will also promote USCIS innovation by enabling testing of a variety of large language models and AI platforms, while informing approaches on prompt generation, cost containment, and workforce literacy. | The purpose of PAiTH is to provide USCIS employees with an internal, secure AI assistant that delivers role-specific support for knowledge work tasks. The system will offer six persona-based assistants aligned to core USCIS job functions, each trained and prompted to provide relevant, accurate responses for that role's responsibilities. 
Intended benefits of PAiTH include: Increased Efficiency: Staff can quickly obtain research, draft language, translations, and technical guidance without manual searching through documents or external research; Role-Specific Accuracy: Persona-based responses tailored to each user's organizational function (legal, contracting, development, etc.) provide more relevant and useful outputs than generic AI; Data Security and Compliance: Internal deployment protects PII and sensitive USCIS data while maintaining compliance with federal security and privacy requirements; Controlled Access to USCIS Knowledge: AI can leverage USCIS-specific documents, policies, and data sources that are inaccessible to commercial AI tools, keeping sensitive information within USCIS boundaries; Standardization: Consistent AI-assisted workflows across the agency reduce variability in research and drafting quality; and Cost Savings: Reduces time spent on routine knowledge tasks, allowing staff to focus on complex decision-making and judgment-based work. 
| PAiTH will generate text-based outputs customized to the user's organizational persona: Legal Persona: Legal research summaries, statute and regulation citations (INA, CFR), case law analysis, draft legal memoranda outlines, document summaries with legal issue identification; Contracts Persona: Market research summaries, FAR/HSAR regulatory guidance, acquisition planning support, vendor comparison analyses, contract language suggestions; Language Translation Persona: Text-to-text translations between English and other languages for immigration documents and communications; Developer Persona: Code generation in various programming languages, code documentation, unit test creation, debugging suggestions, technical documentation drafts; Security Persona: Security compliance checklists, control mapping guidance, risk assessment frameworks, security documentation templates; CFO Persona: Financial research summaries, regulatory compliance guidance, budget justification drafts, data call response templates, training material summaries. All outputs will be text-based responses generated by the AI model, presented in a chat interface, and restricted to authorized USCIS personnel. Outputs will include appropriate disclaimers (e.g., "This is AI-generated research support, not legal advice" for legal persona) and accompanying policy will require human review before being used in any official decision-making, formal communications, and/or reporting. 
| PAiTH will generate text-based outputs customized to the user's organizational persona: Legal Persona: Legal research summaries, statute and regulation citations (INA, CFR), case law analysis, draft legal memoranda outlines, document summaries with legal issue identification; Contracts Persona: Market research summaries, FAR/HSAR regulatory guidance, acquisition planning support, vendor comparison analyses, contract language suggestions; Language Translation Persona: Text-to-text translations between English and other languages for immigration documents and communications; Developer Persona: Code generation in various programming languages, code documentation, unit test creation, debugging suggestions, technical documentation drafts; Security Persona: Security compliance checklists, control mapping guidance, risk assessment frameworks, security documentation templates; CFO Persona: Financial research summaries, regulatory compliance guidance, budget justification drafts, data call response templates, training material summaries. All outputs will be text-based responses generated by the AI model, presented in a chat interface, and restricted to authorized USCIS personnel. Outputs will include appropriate disclaimers (e.g., "This is AI-generated research support, not legal advice" for legal persona) and accompanying policy will require human review before being used in any official decision-making, formal communications, and/or reporting. | |||||||||||||||||||||
| Department Of Homeland Security | USSS | DHS-2641 | Enterprise WiFi | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Reduce the manual investigation/intervention required to configure, monitor, and troubleshoot issues. | Reduced time to resolve reported network issues and outages. | To automate wireless operations and improve reliability, predictability, and visibility into user experiences. Additionally, it lists the core technical features, such as spatial streams, channel bandwidth, modulation techniques, and advanced operational capabilities, ensuring clarity and relevance for technical and professional audiences. | To automate wireless operations and improve reliability, predictability, and visibility into user experiences. Additionally, it lists the core technical features, such as spatial streams, channel bandwidth, modulation techniques, and advanced operational capabilities, ensuring clarity and relevance for technical and professional audiences. | |||||||||||||||||||||
| Department Of Homeland Security | USCG | DHS-2745 | FLIR 280 HD | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Computer Vision | Rapid detection of distressed persons in the water. | Enhanced ability to detect persons in the water to more rapidly deploy life-saving resources. | The operator views an image of the maritime domain. The AI draws boxes around suspected persons in the water to direct the operator to further investigate those anomalous detections. | 15/09/2024 | c) Developed with both contracting and in-house resources | FLIR | No | The operator views an image of the maritime domain. The AI draws boxes around suspected persons in the water to direct the operator to further investigate those anomalous detections. | A mixture of proprietary data from the vendor as well as data gathered during test events. | No | Yes | b) In-progress | There are no impacts to privacy, civil rights, or civil liberties of the public; the AI can only recognize whether or not an object is a person and cannot identify any features of that person | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | a) Yes | b) Not applicable | Other | ||||||
| Department Of Homeland Security | CBP | DHS-2572 | Acoustic Signature AI for Gunshot Detection | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | This system detects sounds that are associated with gunfire. Only events classified as High Confidence are then sent to users as a detection. Users then review the audio of the event to determine the accuracy of the detection and to review the location of the detection on an associated map. If the audio review is deemed relevant by agents and/or system users, agents may be sent to investigate further and a message of potential gunshot activity is sent to field personnel. It may warn of activity south of the border, where notifications may enhance the safety of officers/agents in the region. Any further actions taken will be based on what the agents encounter when they arrive on scene. The AI associated with this system is not a principal basis for a decision or action. | Classical/Predictive Machine Learning | The AI will be used to make confidence determinations between non-gunshot acoustic activity and actual gunshot activity to reduce false alerts to agents monitoring the User Interface. | Gunshot and UAS detection notifications can be used by Agents for enhanced situational awareness in their area of operations. By utilizing this AI learning technology, they will have high-confidence alerts of a detection, with the specific type, as well as a pinpointed GPS location accurate to within 3 meters. | Agents will get a real time notification via text with a link to the location on maps as well as a link to an audio clip recording of the event. With Agent confirmation that the event was correctly identified, the AI utilizes all of the information for future cases. 
This real time notification grants Agents the ability to know what is happening right now and anticipate what might happen next in their environment, increasing agent and officer safety in the geographic region. | 08/09/2025 | a) Purchased from a vendor | Invariant Corporation | No | Agents will get a real time notification via text with a link to the location on maps as well as a link to an audio clip recording of the event. With Agent confirmation that the event was correctly identified, the AI utilizes all of the information for future cases. This real time notification grants Agents the ability to know what is happening right now and anticipate what might happen next in their environment, increasing agent and officer safety in the geographic region. | Initial training was conducted by the vendor using vendor-obtained audio. Additional data was captured during deployments with other customers for further refinement. | No | No | |||||||||||||
| Department Of Homeland Security | CBP | DHS-2729 | Facial Recognition for National Security and Transnational Criminal Organizations | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI utilizes publicly available images and performs a comparative search against other open-source images to identify potential matches. Results produced by the AI can only be utilized for investigative leads and cannot be utilized as a determining factor for any enforcement action by CBP personnel. Therefore, in instances where the AI outputs assist in identifying leads, CBP personnel must then use internal CBP or USG data prior to making any law enforcement decisions or actions. The outputs of the AI do not serve as the principal basis for a high-impact decision or action. | Computer Vision | USBP works to address challenges in identifying individuals who may be associated with national security concerns or transnational criminal organizations, especially in situations where traditional identification methods are unavailable or insufficient. To support this effort, USBP utilizes facial recognition technology to generate investigative leads by providing visually similar photos of unidentified individuals. These leads serve as a starting point for analysts to conduct further investigation. No enforcement action is taken based solely on the leads generated by this tool. All potential identifications undergo thorough investigation and validation to ensure accuracy and compliance with established standards. This approach reflects USBP's commitment to responsibly leveraging technology to enhance national security and combat transnational criminal activities. 
| The intended purpose of this facial recognition technology is to assist USBP agents in addressing the challenges of identifying individuals who may be linked to national security threats or transnational criminal organizations. This capability enhances USBP's ability to develop investigative leads in cases where traditional identification methods are unavailable or insufficient. The benefits include improved efficiency in generating leads, enhanced support for national security and criminal investigations, and the ability to address complex threats more effectively. | The tool generates visually similar photos of individuals, which serve as preliminary leads for analysts to initiate further investigation. These outputs are not definitive identifications but are intended to assist in narrowing investigative focus. No enforcement action is permitted based solely on these leads. Every potential identification must undergo comprehensive investigation and validation by analysts to ensure accuracy and adherence to established investigative protocols. This process ensures the responsible and ethical use of the technology. | 09/10/2025 | a) Purchased from a vendor | Clearview AI | Yes | The tool generates visually similar photos of individuals, which serve as preliminary leads for analysts to initiate further investigation. These outputs are not definitive identifications but are intended to assist in narrowing investigative focus. No enforcement action is permitted based solely on these leads. Every potential identification must undergo comprehensive investigation and validation by analysts to ensure accuracy and adherence to established investigative protocols. This process ensures the responsible and ethical use of the technology. | The outputs of the AI are a "percentage match" which if above a threshold returns additional metadata from the registered person in the gallery. 
The AI was "tuned" by adjusting the threshold over months of testing to ensure matches that did not misidentify persons while still ensuring that various templates and pictures still returned a "real world" positive match. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | |||||||||||
| Department Of Homeland Security | CBP | DHS-163 | Non-Intrusive Inspection (NII) 3D Imaging Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Use millimeter wave data to produce human-interpretable 3D images and cue end users to possible anomalies, helping CBP more effectively and efficiently detect contraband in imported mail at a speed that does not disrupt flow of commerce. | Utilizes AI/ML to generate high resolution, rapid imaging of objects behind occlusions; create 3D images for existing processes without significant slowdowns; and provide a novel narcotics detection capability for the inspection of packages. | Detection alerts for Items of Interest. | 01/04/2025 | c) Developed with both contracting and in-house resources | ThruWave | No | Detection alerts for Items of Interest. | Images and data of baggage inspection. | No | No | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2439 | Hazard Mitigation Assistance Chatbot | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Understanding of Hazard Mitigation Assistance (HMA) programs and data analysis of HMA data. The tool will lead to a consistent and accurate understanding of the programs and contribute to a standardization of project reviews. | Leverage advanced Artificial Intelligence (AI) capabilities to address challenges faced by Hazard Mitigation Assistance (HMA) staff, enhancing productivity for day-to-day functions. | The tool utilizes the OpenAI models through Azure OpenAI to respond to prompts asked through the chatbot. | 18/08/2025 | c) Developed with both contracting and in-house resources | Ideation | Yes | The tool utilizes the OpenAI models through Azure OpenAI to respond to prompts asked through the chatbot. | Training data included all HMA policy, training, and data available on FEMA.gov. This includes OpenFEMA datasets; Legal/Policy Documents (i.e., federal legal and policy references, such as nondiscrimination clauses and presidential memorandums); Regulations (i.e., regulatory texts, including sections of the Code of Federal Regulations (CFR)); and Guides and Handbooks (i.e., FEMA-issued guides that provide frameworks and instructions, such as planning handbooks and operational guides). | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2711 | Technical Resource for Mitigation Programs | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Emergency Management | Pilot | c) Not high-impact | Not high-impact | Generative AI | The FEMA Hazard Mitigation Assistance (HMA) AI solution addresses the challenge of managing complex grant processes that currently rely on manual review of thousands of applications, modifications, and closeout packages. Analysts must manually extract and reconcile data scattered across multiple nonstandardized systems (NEMIS, PARS, PDFs, spreadsheets), risking delays in obligation and closeout of grants. This fragmentation leads to inconsistent compliance determinations, increased audit risk, and inefficient use of limited staff resources. The initial document scan alone takes 45-60 human minutes per modification, with full reviews requiring 1-2 human days, significantly delaying the release of mitigation funds to communities in need. | The AI solution will deliver significant benefits to both FEMA operations and disaster-affected communities. Operationally, it will reduce document review time by 40-70%, saving 15-20 analyst hours per week to prioritize higher-value activities requiring human judgment and stakeholder interaction. The system will enhance compliance through consistent regulatory interpretation, reducing errors and improving financial calculation accuracy. The AI will identify eligibility concerns in real-time, reducing the risk of funding grants that do not align with federal laws, regulations, and executive orders. For the public, the AI will accelerate application review, obligation and closeout of mitigation grants, enabling states, tribes, territories, and local communities to implement risk-reduction projects sooner. This faster release of funds directly enhances public safety and disaster resilience while providing a more consistent application experience across regions. 
| The AI system produces both machine-readable and human-readable artifacts to support grant management throughout the lifecycle. These include structured findings reports that categorize issues by scope, schedule, and budget with source citations; anomaly/discrepancy KPIs highlighting timeline gaps, invoice pattern shifts, and budget-to-scope mismatches; and compliance checklists identifying missing or non-conforming items. For documentation support, the system generates auto-drafted Requests for Information (RFIs), lock-in letters, and closeout letters with precise regulatory citations in a professional tone. It also creates CSV exports listing flagged terms and financial variances with page references. Additionally, the system provides on-demand answers to regulatory questions and prioritized worklists showing grants needing immediate action, supporting knowledge democratization and workflow optimization. | 11/07/2025 | b) Developed in-house | Yes | The AI system produces both machine-readable and human-readable artifacts to support grant management throughout the lifecycle. These include structured findings reports that categorize issues by scope, schedule, and budget with source citations; anomaly/discrepancy KPIs highlighting timeline gaps, invoice pattern shifts, and budget-to-scope mismatches; and compliance checklists identifying missing or non-conforming items. For documentation support, the system generates auto-drafted Requests for Information (RFIs), lock-in letters, and closeout letters with precise regulatory citations in a professional tone. It also creates CSV exports listing flagged terms and financial variances with page references. Additionally, the system provides on-demand answers to regulatory questions and prioritized worklists showing grants needing immediate action, supporting knowledge democratization and workflow optimization. 
| The FEMA model is trained, fine-tuned, and evaluated using comprehensive datasets of historical grant management records, including subaward closeout documentation, financial reconciliation data, and management cost lock-in records from previous disaster declarations across HMGP, PDM, FMA, and BRIC programs. | No | Yes | |||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2727 | Large Language Model (LLM) Guided Data Dictionary Generation | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | This LLM-generated data dictionary eases the burden of metadata documentation on the data stewards when integrating their data into FEMADex by creating field definitions. | By utilizing this LLM-generated data dictionary, it automatically creates field definitions, saving data stewards significant time and effort during data onboarding. | This LLM model utilizes the Retrieval Augmented Generation (RAG) technique for data dictionary generation by first retrieving relevant provided metadata from 1) the source system intake form and 2) an acronym key. The LLM then uses this context to generate clear, brief descriptions for each field. | 01/09/2025 | b) Developed in-house | No | This LLM model utilizes the Retrieval Augmented Generation (RAG) technique for data dictionary generation by first retrieving relevant provided metadata from 1) the source system intake form and 2) an acronym key. The LLM then uses this context to generate clear, brief descriptions for each field. | This model utilizes metadata documentation provided by data stewards, known as the Source System Intake Form (SSIF), during source system intake for FEMADex. Additionally, an acronym key for each source system is also used and provided by the respective data stewards. The SSIF and acronym key are unique for each source system. | No | Yes | |||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2454 | LIGER Generative AI Toolkit | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | LIGER® for FPS will enable FPS users to employ the power of a Large Language Model (LLM) against non-public and sensitive Agency documents to save time and effort conducting time-intensive tasks, such as drafting documents, including: Position Descriptions (PD), Statements of Work (SOW) for contracting actions, professional emails and workforce announcements, Public Affairs stories and releases, and Law Enforcement operations orders; summarization of large documents; proofreading and providing feedback or suggestions on written work; conducting policy analysis, including: policy comparison and compliance verification, building textual process maps based on policy, and identifying contradictory or outdated policy; budget forecasting and spend plan analysis; assisting with code generation, review, and debugging; machine language translation; brainstorming ideas for projects or processes; and information/data retrieval from document libraries. The system will also be used to automate manual processes related to generation of templated documents. | FPS personnel spend considerable time creating documents from scratch or manually searching large volumes of documents to retrieve information, identify responsibilities, and find discrepancies or outdated passages within policies. Use of LIGER® is expected to substantially reduce time and associated costs in generating new draft documents such as Statements of Work, Law Enforcement operations orders, updated Position Descriptions, and other documents using previous similar examples, reviewing large volumes of information for outdated policy or discrepancies with new DHS policy or Executive Orders, and summarizing large single documents or document collections. 
LIGER’s ability to securely handle sensitive information and return responses based on custom document collections offers advantages for Controlled Unclassified Information (CUI) such as LES and FOUO data, which is not suitable for other GenAI applications. Additionally, because LIGER® cites sources, users can rapidly find where specific information from the generated narrative can be found in source documents. | LIGER® uses Natural Language Processing (NLP) to return an easily readable text narrative. Like all GenAI, the text response should be verified for accuracy. | 26/08/2025 | a) Purchased from a vendor | LMI Consulting, LLC | No | LIGER® uses Natural Language Processing (NLP) to return an easily readable text narrative. Like all GenAI, the text response should be verified for accuracy. | LIGER currently uses the ChatGPT-4o large language model provided through OpenAI's service on the DHS Enterprise Cloud - Azure. LIGER itself is not "trained" by the data provided for use with the application and it does not provide user data to LMI or any external entity to fine-tune the components that comprise LIGER. Unlike ChatGPT, LIGER does not have "persistent memory" of inputs provided across different "chat" strings. | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-185 | Babel | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Babel utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface, versus doing multiple manual queries. The output is not singly used for action or decision making. | CBP uses this tool to conduct targeted queries to aid CBP in open source research to monitor potential threats or dangers or identify travelers who may be subject to further inspection for violation of laws CBP is authorized to enforce or administer. | Babel utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface versus doing multiple manual queries. The output is not singly used for action or decision making and is used to identify additional Open Source or Social Media of a person or identify additional selectors (such as phone numbers and emails) that were previously unknown to CBP and are compared by an analyst against Government systems to identify additional derogatory information. These factors can often eliminate the traveler from additional screening. | 29/08/2023 | c) Developed with both contracting and in-house resources | Babel Street | Yes | Babel utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface versus doing multiple manual queries. 
The output is not singly used for action or decision making and is used to identify additional Open Source or Social Media of a person or identify additional selectors (such as phone numbers and emails) that were previously unknown to CBP and are compared by an analyst against Government systems to identify additional derogatory information. These factors can often eliminate the traveler from additional screening. | Babel uses proprietary data, public datasets, and machine-labeled datasets to train its NLP and matching models. Evaluation data includes human-annotated datasets; precision, recall, and F1 score assessments; and customer-provided labeled name pairs for tuning. All returned results are carefully reviewed, and no sensitive CBP-owned data is involved in the process. | Yes | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | a) Yes | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | Potential for inaccurate translation or translation missing context due to local variant dialects or slang that are not captured by the vendor, which can be mitigated through coordination with CBP employees who are from that country/area, foreign language certified, and can be consulted for clarification. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | a) Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Homeland Security | CBP | DHS-2380 | Passive Body Scanner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Identify anomalies in body heat, assisting CBP officers to detect concealed weapons and contraband, allowing for efficient processing of travelers | PBS is intended to enhance situational awareness in pedestrian traveler processing to aid CBP officers in observing potentially dangerous objects or contraband in a timely manner and pursuant to CBP’s border search authority. | This algorithm highlights areas on a person where potential objects may be blocking the subject's expected body heat and displays these areas on live video image, monitored by a CBP officer. The highlighted areas may show the locations of carried objects, which could be potential weapons or contraband. | 29/09/2023 | c) Developed with both contracting and in-house resources | ThruVision TAC 16 | Yes | This algorithm highlights areas on a person where potential objects may be blocking the subject's expected body heat and displays these areas on live video image, monitored by a CBP officer. The highlighted areas may show the locations of carried objects, which could be potential weapons or contraband. | All data used in training, validation, test and evaluation of the AI is Thruvision proprietary – no data from any external sources (including the Agency) is used. Approximately 25,000 images have been used for training the DynamicDETECT model. These training images were extracted from staged screening events recorded with Thruvision cameras using various actors, concealment items, item locations and clothing. 
| No | https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp-017a-niisystemsprogrampedestriandetectionatrange-october2021.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp-017a-niisystemsprogrampedestriandetectionatrange-october2021.pdf | Negligible impacts to traveler safety - PBS uses passive means (thermal imaging, no radiation emitted) to “look” for contraband or weapons on a traveler. If the PBS operator sees a weapon (either with or without the PBS), they will seek supervisor approval to conduct a pat down and initiate a secondary referral. If they see contraband, they may direct the traveler to stop and notify a supervisor, who will decide if the image or other factors meet the threshold to conduct a pat down. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | b) Not applicable | Other | ||||
| Department Of Homeland Security | CBP | DHS-2388 | CBP Translate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Assist officers and agents with immediate interpretation needs when human translators are not available. | CBP Translate enhances efficiency by expediting questioning when immediate interpretation is needed. It ensures clear communication, minimizes misunderstandings, and offers immediate accessibility via mobile and web platforms. This improves operational flexibility and creates a smoother experience for travelers. | The outputs of CBP Translate include translated text or audio in the form of chat bubbles, which store each interaction. Additionally, CBPOs can capture images of non-travel documents for text translation, but images of actual travel documents are not taken. | 07/08/2019 | c) Developed with both contracting and in-house resources | Aneesh Technologies, 24X7, Ellumen Inc., Deloitte, NiyamIT | Yes | The outputs of CBP Translate include translated text or audio in the form of chat bubbles, which store each interaction. Additionally, CBPOs can capture images of non-travel documents for text translation, but images of actual travel documents are not taken. | The models are trained using examples of translated sentences and documents, which are typically collected from the public web. A data miner that focuses more on precision than recall is used, which allows the collection of higher quality training data from the public web. 
| Yes | https://www.dhs.gov/publication/dhscbppia-069-cbp-translate-application | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-069-cbp-translate-application | The key risks would be the program's inability to accurately translate what was spoken by both sides of the conversation, leading to significant delays in emergency response situations when trying to leverage traditional phone-based translation services in areas with limited cell phone reception. Inaccuracy may also lead to longer processing times at Ports of Entry. These were identified via feedback from the end-users and a common understanding regarding LLM language translation models. | d) In-progress | b) Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2389 | Passenger Security Assessment Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This model aids CBP to efficiently identify security risks, especially related to narcotics interdiction, by providing real-time risk assessments that overcome limitations of traditional, time-intensive methods. The model solves the problem by providing CBP personnel with real-time risk assessments and actionable recommendations integrated into existing systems. By analyzing data not typically accessible during initial processing, the model enhances the ability to detect smuggling indicators and prioritize high-risk individuals or vehicles for further inspection. This improves the efficiency and effectiveness of border security operations, enabling CBP to better safeguard the nation while maintaining the flow of legitimate travel and trade. | This model is designed to support CBP personnel in quickly recognizing crossings that may warrant additional scrutiny, thereby enhancing border security and safety. | The outputs include risk assessments and recommendations, which are integrated into existing passenger processing and threat targeting systems, such as the Automated Targeting System (ATS). These notifications equip CBP personnel with actionable insights to address potential security concerns in real-time. | 01/04/2013 | b) Developed in-house | Yes | The outputs include risk assessments and recommendations, which are integrated into existing passenger processing and threat targeting systems, such as the Automated Targeting System (ATS). These notifications equip CBP personnel with actionable insights to address potential security concerns in real-time. | This model leverages data housed within the Automated Targeting System (ATS) Unified Passenger (UPAX). 
| Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Sex/Gender, Age | Yes | a) Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Risks include false positives and negatives, which could result in delays for travelers, failure to detect narcotics smuggling, or missed detections; algorithmic bias may disproportionately target certain types of travelers and crossing behaviors (related to model training using historical seizures); and ongoing challenge of traffickers adapting their methods to evade detection. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | ||||
| Department Of Homeland Security | CBP | DHS-2390 | Cargo Security Assessment Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This use case addresses the challenge of efficiently identifying and mitigating risks associated with cargo shipments entering the United States. With the high volume of shipments processed daily at ports of entry, it is essential to detect potentially high-risk shipments, such as those that may pose security threats, without causing delays to legitimate trade and commerce. This use case uses advanced data analytics and machine learning to enhance the ability to evaluate and prioritize shipments for further review, ensuring that flagged cargo is inspected appropriately while maintaining efficient cargo processing operations. | AI/ML models identify high-risk shipments to aid CBP officers in detecting narcotics smuggling threats, identifying candidate shipments for review and referral for inspection at CBP Ports of Entry (POEs). | High-risk model results are returned to users as a system rule hit. These rule hits are viewable in the associated system results window. From this window, CBP operational personnel review and assess results for next action, including possible shipment examination. | 01/12/2011 | b) Developed in-house | Yes | High-risk model results are returned to users as a system rule hit. These rule hits are viewable in the associated system results window. From this window, CBP operational personnel review and assess results for next action, including possible shipment examination. | This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Yes | a) Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Risks include false positives and negatives, which could lead to unnecessary inspections or missed detections; bias in algorithms that may disproportionately target certain importers; and the ongoing challenge of traffickers adapting their methods to evade detection. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | d) Law, operational limitations, or governmentwide guidance precludes an opportunity for an individual to appeal | Other | |||||
| Department Of Homeland Security | CBP | DHS-2391 | Illicit Trade | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The use case is designed to improve the identification and prioritization of high-risk inbound cargo shipments that may violate trade regulations. Using advanced AI and machine learning models, the system enhances risk assessment processes, helping CBP personnel more effectively detect suspicious shipments and potential compliance issues. By analyzing historical data and risk attributes and employing predictive modeling, the AI supports CBP in streamlining enforcement actions and improving the accuracy of targeting shipments for additional review and screening. This approach helps optimize resource allocation and strengthens CBP's ability to enforce trade regulations efficiently. | The model identifies high-risk shipments to support CBP personnel in managing their workload associated with detecting threats and selecting candidate shipments for review and additional screening. | The model results are sent to the Automated Targeting System for review and assessment by operational personnel, who may conduct additional screening if necessary. | 25/07/2023 | b) Developed in-house | Yes | The model results are sent to the Automated Targeting System for review and assessment by operational personnel, who may conduct additional screening if necessary. | This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Yes | a) Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Risks include false positives and negatives, which could lead to unnecessary inspections or missed detections; and bias in algorithms that may disproportionately target certain importers. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | |||||
| Department Of Homeland Security | CBP | DHS-2412 | Supervised Traveler Identity Verification Services (Officer Initiated) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Biometric Entry processing fulfills a Congressional mandate. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response | 01/09/2017 | c) Developed with both contracting and in-house resources | CBP procured mobile devices (Apple, Samsung) and commercial off-the-shelf cameras (Logitech) | Yes | Leverages DHS facial matching technologies to provide a match or no match response | Border Crossing Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service; https://www.federalregister.gov/documents/2016/12/13/2016-29898/privacy-act-of-1974-department-of-homeland-securityus-customs-and-border-protection-007-border | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service; https://www.federalregister.gov/documents/2016/12/13/2016-29898/privacy-act-of-1974-department-of-homeland-securityus-customs-and-border-protection-007-border | The key risk is that TVS verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2413 | Semi-Supervised Traveler Identity Verification Services (Traveler Initiated) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Biometrically processes travelers on entry. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages TVS facial matching technologies to provide a match or no match response | 01/02/2024 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages TVS facial matching technologies to provide a match or no match response | Trusted Traveler Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2414 | 3rd Party Traveler Identity Verification Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Biometric Exit processing fulfills a Congressional mandate. | TVS is a cloud-based facial biometric matching service that enables CBP, External Partners, and Other Government Agencies (OGA) to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/05/2017 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Border Crossing Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2415 | Traveler Self-Service Mobile Identity Verification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The use of the Traveler Verification Service (TVS) in these use cases enables biometric identity verification and facilitates travel. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/01/2022 | c) Developed with both contracting and in-house resources | NEC (algorithm only) | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Vetting/Border Crossing Information/ Trusted Traveler Information | Yes | https://www.dhs.gov/publication/electronic-system-travel-authorization ; https://www.dhs.gov/publication/dhscbppia-051-automated-passport-control-apc-and-mobile-passport-control-mpc ; https://www.dhs.gov/publication/global-enrollment-system-ges | Yes | a) Yes | https://www.dhs.gov/publication/electronic-system-travel-authorization ; https://www.dhs.gov/publication/dhscbppia-051-automated-passport-control-apc-and-mobile-passport-control-mpc ; https://www.dhs.gov/publication/global-enrollment-system-ges | The key risk is that TVS verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | General solicitations of feedback and comments from the public | ||||
| Department Of Homeland Security | CBP | DHS-2416 | Traveler Identity Verification Services (Vetting) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The use of the Traveler Verification Service (TVS) enables CBP to enhance the identification of possible threats by leveraging facial recognition technology to identify biometric matches to derogatory records that are not identified through existing biographic targeting and entity resolution mechanisms. | CBP's Traveler Identity Verification Services (Vetting) utilizes facial recognition technology to enhance threat identification by matching travelers' biometrics against records of concern. | When the system identifies a potential match to concerning records, CBP personnel conduct a manual facial comparison to determine whether the record is likely associated with the individual. | 01/12/2018 | c) Developed with both contracting and in-house resources | NEC | Yes | When the system identifies a potential match to concerning records, CBP personnel conduct a manual facial comparison to determine whether the record is likely associated with the individual. | Border Crossing Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2538 | Open Source and Social Media Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of efficiently identifying potential threats and admissibility concerns by quickly analyzing vast amounts of open-source and social media data for security risks to enhance U.S. national security. This tool then presents information to a CBP Officer/analyst for manual review, verification, and validation for violations of Title 8 and Title 19 or other laws that CBP is sworn to enforce. The output is not used as the sole basis for action or decision making. | CBP uses this tool to conduct targeted queries to aid CBP in open-source research to monitor potential threats or dangers or identify travelers who may be subject to further inspection for violation of laws CBP is authorized to enforce or administer. | This tool utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface versus doing multiple manual queries. The outputs are not used as the sole basis for action or decision making; they are used to identify additional open-source or social media content associated with a person, or to identify additional selectors (such as phone numbers and emails) previously unknown to CBP, which an analyst compares against Government systems to identify additional derogatory information. | 01/01/2025 | a) Purchased from a vendor | NexisXplore | No | This tool utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface versus doing multiple manual queries. The outputs are not used as the sole basis for action or decision making; they are used to identify additional open-source or social media content associated with a person, or to identify additional selectors (such as phone numbers and emails) previously unknown to CBP, which an analyst compares against Government systems to identify additional derogatory information. | Training data was collected from several publicly available, social media, and media outlet sites. This approach ensured the model was trained across several different groups representing an array of possible language types and vernaculars so as not to cause bias toward a specific demographic. Along with the above open-source data, the vendor leverages a mix of proprietary data to ensure the data is representative of real-world conditions and context. | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | b) In-progress | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | AI could potentially mislabel an object; however, all results are reviewed by a law enforcement officer, and OSINT results are only one section of data among many when reviewing admissibility. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | b) Not applicable | a) Yes, an appropriate appeal process has been established | In-progress | ||||
| Department Of Homeland Security | CBP | DHS-2561 | Cryptocurrency Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Identification of transactions that may have been made with a designated entity, company, or location that may be using crypto to circumvent financial reporting. This tool will help CBP identify businesses and travelers who may be using cryptocurrency to conceal illicit transactions not reported within the U.S. financial system. | Quicker identification of risks related to cryptocurrency accounts to aid in addressing potential admissibility concerns (in lieu of completely manual research of the same accounts). | Highlighting transactions (labeling them as at-risk) that may have been made with a designated entity, company, or location that may be using crypto to circumvent financial reporting. Use of dark web marketplaces, illicit funding sites, or other financial activity that would normally be flagged by a bank or other financial institution in accordance with US law. | 01/01/2025 | a) Purchased from a vendor | TRM Labs | No | Highlighting transactions (labeling them as at-risk) that may have been made with a designated entity, company, or location that may be using crypto to circumvent financial reporting. Use of dark web marketplaces, illicit funding sites, or other financial activity that would normally be flagged by a bank or other financial institution in accordance with US law. | Training data was collected from several publicly available, social media, and media outlet sites. This approach ensured the model was trained across several different groups representing an array of possible language types and vernaculars so as not to cause bias toward a specific demographic. Along with the above open-source data, the vendor leverages a mix of proprietary data to ensure the data is representative of real-world conditions and context. | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | b) In-progress | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | Identification of US Government crypto wallets that are being utilized for criminal investigations which are participating in transactions on illicit marketplaces. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | ||||
| Department Of Homeland Security | CBP | DHS-2570 | Traffic Jam | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Access to a robust data source of potential runaways, missing children, and sex trafficking victims alongside the associated analytics tools to support CBP in our mission to protect the most vulnerable populations. | Traffic Jam leverages AI for facial recognition purposes and to draw together similar data points from a large pool of data in support of human trafficking and exploitation investigations. Manually combing through available data takes an extensive amount of time and resources, with often less than desirable outcomes. The AI solution in Traffic Jam is able to quickly identify the most relevant information and possible matching for immediate review, allowing officers and analysts to focus limited time in a high-tempo operational environment in the most effective manner possible. | The AI system is only used for the purposes of returning possible matches to query criteria within the Traffic Jam system. The data submitted by the end user is not used to train or refine the AI model; submitted images are cached for 2 hours to facilitate user support, but nothing is retained permanently in the Traffic Jam database. Additionally, the information is reported back to CBP officers and agents to review for further research and determination of next steps. | 29/09/2025 | a) Purchased from a vendor | Marinus Analytics | Yes | The AI system is only used for the purposes of returning possible matches to query criteria within the Traffic Jam system. The data submitted by the end user is not used to train or refine the AI model; submitted images are cached for 2 hours to facilitate user support, but nothing is retained permanently in the Traffic Jam database. Additionally, the information is reported back to CBP officers and agents to review for further research and determination of next steps. | Proprietary, public, and machine-labeled datasets including structured and unstructured data such as online ads, research datasets, images, and geospatial information. All data used for model training and testing is either publicly available, lawfully obtained, or provided by partner agencies under appropriate agreements, and is processed in compliance with privacy and security standards. | Yes | No | a) Yes | Using Traffic Jam or similar commercial facial recognition vendors introduces additional privacy and civil liberties risks, particularly around data control, transparency, and accountability. Mitigation steps are taken as outlined in the Privacy Impact Assessment to minimize or eliminate the impacts of using artificial intelligence. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | a) Yes, an appropriate appeal process has been established | Other | ||||||
| Department Of Homeland Security | CBP | DHS-2619 | CBP Link | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | CBP Link utilizes liveness detection to ensure the photo that TVS is utilizing to conduct facial matching against CBP holdings is of a live person and not a 2-D image, as well as to validate the presence of a specific user by collecting geolocation to confirm that the person is in the required location. | CBP Link uses liveness detection and TVS uses facial recognition to compare live or uploaded images with CBP's database, enabling real-time identity verification. This automation streamlines border processes, enhances accuracy, and reduces fraud. | CBP Link outputs include identity match confirmation, fraud alerts, and traveler status updates for clearance in processes like boarding or border crossing. | 16/06/2025 | c) Developed with both contracting and in-house resources | IProov | Yes | CBP Link outputs include identity match confirmation, fraud alerts, and traveler status updates for clearance in processes like boarding or border crossing. | CBP Link submission information. | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0611_priv_pia-cbp-083-cbplink.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0611_priv_pia-cbp-083-cbplink.pdf | A false negative facial match or liveness detection result could prevent a user from providing proof of departure. As an alternative, the user can provide proof of departure utilizing any of the means described here: https://i94.cbp.dhs.gov/home | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2669 | Land Border Integration | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The Land Border Integration system addresses the need for efficient and accurate data capture and processing during vehicle inspections at land-border Ports of Entry (PoEs). The system is designed to detect and interpret alphanumeric values from license plates captured by cameras, integrating with License Plate Readers (LPR) to classify license plate numbers, their country, and state of origin. Additionally, it leverages facial recognition technologies to analyze vehicle occupants, enhancing situational awareness for border officers. By utilizing artificial intelligence (AI) technologies and edge-based processing, the system minimizes reliance on centralized systems and enables real-time video stream analysis. This ensures timely and actionable insights, allowing officers to vet travelers more efficiently without manually entering information. The solution supports critical operations by improving the speed and accuracy of data processing, enhancing operational effectiveness, and streamlining the inspection process. | The intended purpose of the Land Border Integration system is to enhance the efficiency and accuracy of data processing during vehicle inspections at land-border Ports of Entry (PoEs). By leveraging artificial intelligence (AI) technologies, the system captures and interprets visual data, including license plate information, vehicle classification (make, model, and color), and occupant identification through facial recognition, in real time. This edge-based AI solution supports situational awareness and operational decision-making by providing timely and actionable insights to officers. It minimizes reliance on centralized systems, streamlines the vetting process, and reduces the need for manual data entry, enabling officers to focus on critical tasks. The system improves operational effectiveness, enhances border security, and facilitates efficient traveler processing. | The Land Border Integration system generates alphanumeric values extracted from license plates, along with additional outputs such as the detected vehicle's make, model, color, and license plate origin (country and state). These outputs are ingested and processed to support law enforcement activities during cross-border inspections. The system also provides facial recognition data to analyze vehicle occupants, further enhancing situational awareness. These outputs are presented to booth officers, who can accept or correct the AI-generated information. By delivering actionable data in real time, the system improves operational efficiency, streamlines the inspection process, and supports informed decision-making during border security operations. | 12/04/2021 | a) Purchased from a vendor | Rekor | Yes | The Land Border Integration system generates alphanumeric values extracted from license plates, along with additional outputs such as the detected vehicle's make, model, color, and license plate origin (country and state). These outputs are ingested and processed to support law enforcement activities during cross-border inspections. The system also provides facial recognition data to analyze vehicle occupants, further enhancing situational awareness. These outputs are presented to booth officers, who can accept or correct the AI-generated information. By delivering actionable data in real time, the system improves operational efficiency, streamlines the inspection process, and supports informed decision-making during border security operations. | In support of production performance metrics for device health purposes, LBI uses both license plate and RFID read metrics to evaluate device health states. For example, identifying breakage of devices when there are missed LPR or RFID reads. In the future, LBI intends to use license plate reads from production Ports of Entry (POE) to support training. | Yes | No | a) Yes | Potential impact to an individual or entity's civil liberties or privacy. | b) Yes – by an agency AI oversight board not directly involved in the AI’s development | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | ||||||
| Department Of Homeland Security | CBP | DHS-2731 | Mobile Fortify | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The "Mobile Fortify" application utilizes CBP's facial comparison or DHS's fingerprint matching to quickly verify subjects of interest during operations. | Utilizing facial comparison or fingerprint matching services, agents/officers in the field are able to quickly verify identity utilizing trusted source photos. | The mobile application will display either a no-match indicator or a match with biographic information back to the agent/officer. | 01/05/2025 | a) Purchased from a vendor | NEC | Yes | The mobile application will display either a no-match indicator or a match with biographic information back to the agent/officer. | Vetting/Border Crossing Information/ Trusted Traveler Information | Yes | Yes | a) Yes | The key risk is that CBP's facial comparison verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | d) In-progress | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other, Direct usability testing | ||||||
| Department Of Homeland Security | CBP | DHS-315 | ERNIE | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | ERNIE is used to analyze Radiation Portal Monitor (RPM) data to enhance the detection of radioactive materials. It provides a more efficient review of stream-of-commerce radiation portal monitor data and provides real-time risk assessments and alerts for potential threats. | The model enhances threat detection and prioritizes high-risk targets, improving operational efficiency and national security. | The model provides real-time risk assessments and alerts for potential threats detected by the Radiation Portal Monitors. It also provides prioritized recommendations for further screening based on the analysis of radiation data. | 01/10/2017 | c) Developed with both contracting and in-house resources | Countering Weapons of Mass Destruction, Department of Homeland Security | Yes | The model provides real-time risk assessments and alerts for potential threats detected by the Radiation Portal Monitors. It also provides prioritized recommendations for further screening based on the analysis of radiation data. | Numerical data from RPM radiation detectors and ERNIE assessments. | No | Yes | a) Yes | The safety risk is that ERNIE might fail to identify a radiation threat. If ERNIE cannot make a decision, the system falls back to the default deterministic algorithm. Were there to be a pervasive failure of the system such that no indication was provided at all, a procedure is in place for the officer to perform a manual scan. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | ||||||
| Department Of Homeland Security | CBP | DHS-398 | Unified Processing/Mobile Intake | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The need to use every available resource to identify individuals who may pose a threat to national security or may be members of terrorism or transnational criminal organizations. | The objective is to facilitate the swift and accurate biometric identification of individuals encountered by CBP, thereby expediting their processing. | CBP utilizes facial matching technologies to verify identity. This process compares an individual's live photo against existing government photo holdings to confirm identity. | 01/03/2022 | c) Developed with both contracting and in-house resources | NEC (Nippon Electric Company) | Yes | CBP utilizes facial matching technologies to verify identity. This process compares an individual's live photo against existing government photo holdings to confirm identity. | Border Crossing Information | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing, Other | ||||
| Department Of Homeland Security | CBP | DHS-80 | Traveler Entity Resolution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Traveler Entity Resolution AI/ML models aim to improve both security and operational efficiency by focusing on individuals who may present higher risks, by improving the certainty of traveler record matches to assist CBP personnel in identifying suspicious travelers for follow-on action. | To enhance the efficiency and effectiveness of screening passengers for potential security risks. The AI model assesses traveler data, such as travel patterns and historical records, to help CBP personnel prioritize higher-risk individuals for further screening. This streamlines the vetting process and allows CBP personnel to focus resources on the highest-risk travelers, improving border security while reducing the burden of manual screening. | The outputs are integrated into the Automated Targeting System (ATS), which generates notifications to recommend further inspection or follow-up actions. These recommendations assist CBP personnel in making real-time decisions about which travelers to prioritize for further screening. CBP personnel retain the final authority in the decision-making process, ensuring that human judgment remains central to border security operations. | 01/12/2012 | b) Developed in-house | Yes | The outputs are integrated into the Automated Targeting System (ATS), which generates notifications to recommend further inspection or follow-up actions. These recommendations assist CBP personnel in making real-time decisions about which travelers to prioritize for further screening. CBP personnel retain the final authority in the decision-making process, ensuring that human judgment remains central to border security operations. | This model leverages data housed within the Automated Targeting System (ATS) Unified Passenger (UPAX). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Sex/Gender, Age | Yes | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Risks include false positives and negatives, which could result in delays for travelers, failure to detect narcotics smuggling, or missed detection; algorithmic bias may disproportionately target certain types of travelers and crossing behaviors (related to model training using historical seizures); and the ongoing challenge of traffickers adapting their methods to evade detection. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | d) Law, operational limitations, or governmentwide guidance precludes an opportunity for an individual to appeal | Other | ||||
| Department Of Homeland Security | DHS | DHS-365 | Consular Consolidated Database (CCD) Facial Recognition (FR) On Demand Report (VISA Only) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Computer Vision | On-demand facial recognition services. | DHS Components use the Facial Recognition (FR) on Demand report (Visa only) to combat fraud by benefit applicants whose fingerprints are not in IDENT but who may have photos in the Department of State's (DoS) Consular Consolidated Database (CCD) that predate the fingerprinting of visa applicants. | Facial Recognition and other biometric checks and reports. | 01/04/2019 | a) Purchased from a vendor | Department of State | No | Facial Recognition and other biometric checks and reports. | Information stored within the Consular Consolidated Database. | Yes | Race/Ethnicity, Sex/Gender, Age | No | b) In-progress | Potential mismatch of face images and/or bias based on demographic data held by DoS. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | |||||
| Department Of Homeland Security | ICE | DHS-2408 | Hurricane Score | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of understanding which noncitizens in Enforcement and Removal Operations’ Alternatives to Detention - Intensive Supervision Appearance Program are most likely to abscond. | The Hurricane Score helps officers quickly evaluate substantial amounts of case information across thousands of Alternatives to Detention - Intensive Supervision Appearance Program participants. By surfacing a risk indicator based on observed absconding patterns, it can provide additional insight that might not be apparent from manual review alone. This supports more consistent and efficient case reviews and helps officers allocate case management resources more effectively while maintaining individualized assessments. | Once individuals are enrolled in Enforcement and Removal Operations’ Alternatives to Detention - Intensive Supervision Appearance Program (ATD-ISAP), officers periodically review each case to determine whether the current level of case management and technology assignment remains appropriate or should be adjusted. During case reviews, an analyst or officer provides the Hurricane Score model with information already known about an ATD-ISAP participant, including case management details and participant actions. The model is a quasi-binomial, binary classification machine learning (ML) model trained on inactive ATD-ISAP case data to identify patterns associated with prior absconding behavior. Based on the provided inputs, the model outputs a score from 1 to 5, with higher scores indicating a higher model-estimated risk that the individual may abscond. Officers may then consider this score, along with many other factors, when determining whether current levels of case management or technology assignment remain appropriate or should be adjusted. | 01/02/2019 | b) Developed in-house | No | Once individuals are enrolled in Enforcement and Removal Operations’ Alternatives to Detention - Intensive Supervision Appearance Program (ATD-ISAP), officers periodically review each case to determine whether the current level of case management and technology assignment remains appropriate or should be adjusted. During case reviews, an analyst or officer provides the Hurricane Score model with information already known about an ATD-ISAP participant, including case management details and participant actions. The model is a quasi-binomial, binary classification machine learning (ML) model trained on inactive ATD-ISAP case data to identify patterns associated with prior absconding behavior. Based on the provided inputs, the model outputs a score from 1 to 5, with higher scores indicating a higher model-estimated risk that the individual may abscond. Officers may then consider this score, along with many other factors, when determining whether current levels of case management or technology assignment remain appropriate or should be adjusted. | Inactive case data from individuals enrolled in the ATD-ISAP program. | Yes | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | Sex/Gender, Age | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | Predictive ML techniques can produce misleading results, such as false positives, which could impact case management decisions if relied upon as a primary factor. For instance, an inaccurate Hurricane Score might lead to stricter or more lenient compliance or technology requirements for an individual. ERO mitigates this by using the score as one of many factors in determining case management or technology levels for individuals in the ATD program. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | d) Law, operational limitations, or governmentwide guidance precludes an opportunity for an individual to appeal | Direct usability testing | ||||
| Department Of Homeland Security | ICE | DHS-2457 | Facial Recognition for Locating Vulnerable Populations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The use case addresses the challenge of identifying and locating members of vulnerable populations, such as unaccompanied minors who have crossed the border, whose identities and locations are unknown to law enforcement. | The facial recognition service reduces the time personnel spend manually searching for images online and helps them discover potentially relevant photographs or profiles that they might not otherwise find. This can improve the speed and effectiveness of efforts to identify and locate vulnerable individuals and support appropriate protective or assistance measures. | Investigators submit facial photos obtained through lawful means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with references to the public sources where those images were found, so personnel can review them in context. These results are treated as leads that may help identify a person or their associates, but they are not confirmations on their own. | 23/01/2025 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Investigators submit facial photos obtained through lawful means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with references to the public sources where those images were found, so personnel can review them in context. These results are treated as leads that may help identify a person or their associates, but they are not confirmations on their own. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | ICE | DHS-2458 | Facial Recognition for National Security Investigations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This use case addresses the challenge of identifying individuals of interest in authorized national security investigations. | The facial recognition service helps investigators reduce the time needed to manually search for images and associated information online. By surfacing potentially relevant images that might otherwise be missed, it can improve the speed and effectiveness of national security investigations and allow investigators to focus their efforts on analysis, corroboration, and case-building. | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns candidate matches and links or references to the public sources where those images appear so investigators can review them in context and evaluate whether they may be relevant to a case. | 23/01/2025 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns candidate matches and links or references to the public sources where those images appear so investigators can review them in context and evaluate whether they may be relevant to a case. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | ICE | DHS-2459 | Facial Recognition for Investigations of Transnational Criminal Organizations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The problem this use case solves is the challenge of identifying unknown individuals involved in transnational criminal activities, such as violent crimes, drug trafficking, human smuggling, and financial fraud. | The facial recognition service reduces the time investigators spend manually searching for images online and helps them discover potentially relevant photographs or profiles that they might not otherwise find. This can improve the speed and effectiveness of investigations into complex transnational criminal networks while allowing investigators to focus on analysis, corroboration, and case-building. | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | 23/01/2025 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | ICE | DHS-2556 | AI-Assisted Resume Screening Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | a) High-impact | High-impact | Generative AI | This use case intends to solve the problem of human bias during resume reviews and the time-intensive process of reviewing candidate resumes. | This solution applies the same review criteria to every candidate’s resume. This reduces human cognitive bias and variability in how HR specialists evaluate candidate resumes. Additionally, this solution speeds up the time-to-hire by reducing the amount of time spent conducting manual candidate resume reviews. | The evaluation model compares each resume against the associated job requirements and provides a numerical score, a scoring group (red, yellow, green, or blue), related experience, and missing experience. The scoring group categorizes candidates based on the percentage of matching experience, with red indicating a weak candidate, yellow indicating moderate alignment, green indicating a strong candidate, and blue indicating that the system was unable to score the resume due to issues such as missing documents. | 01/01/2026 | c) Developed with both contracting and in-house resources | AIS | No | The evaluation model compares each resume against the associated job requirements and provides a numerical score, a scoring group (red, yellow, green, or blue), related experience, and missing experience. The scoring group categorizes candidates based on the percentage of matching experience, with red indicating a weak candidate, yellow indicating moderate alignment, green indicating a strong candidate, and blue indicating that the system was unable to score the resume due to issues such as missing documents. | OpenAI's GPT-4 is trained on Common Crawl and publicly available data. ICE does not provide any training data and uses the pre-trained base models as is. Pre-trained models used do not require training data. Human-evaluated resumes are compared to tool output for validation. Production data will be candidate resumes. | Yes | https://www.dhs.gov/sites/default/files/2025-03/25_0331_priv_pia-dhs-all-043a-talentacquisition-appendix-update.pdf | No | b) In-progress | https://www.dhs.gov/sites/default/files/2025-03/25_0331_priv_pia-dhs-all-043a-talentacquisition-appendix-update.pdf | In-Progress - potential impacts will be identified during AI Impact Assessment. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | Establishment of an appropriate appeal process is in-progress | ||||
| Department Of Homeland Security | ICE | DHS-2577 | Mobile Fortify | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The AI is intended to solve the problem of confirming individuals’ identities in the field when officers and agents must work with limited information and access multiple disparate systems to identify individuals and retrieve existing data relevant to enforcement, investigations, and victim protection activities. | The use of AI in this process increases the speed and efficiency of identifying individuals and organizing identity information, supporting immigration enforcement, authorized investigations, and victim protection efforts. | Mobile Fortify runs on a mobile device and can capture facial images, contactless fingerprints, and photographs of identity documents. The application transmits this data to U.S. Customs and Border Protection (CBP) for submission to government biometric matching systems. Those systems use AI-based matching techniques, including facial recognition and fingerprint matching, to compare the captured data against existing records and return possible matches with associated biographic information. The tool also uses optical character recognition to extract text from identity documents to support additional checks. ICE does not own or interact directly with the AI models that perform biometric matching or optical character recognition. CBP owns and operates these models, and Mobile Fortify simply displays the results to ICE users. For additional details on the AI models that support the application, see CBP’s Mobile Fortify AI use case. | 20/05/2025 | c) Developed with both contracting and in-house resources | NEC is the third‑party vendor CBP uses. ICE accesses these capabilities through CBP and does not contract directly with NEC. | Yes | Mobile Fortify runs on a mobile device and can capture facial images, contactless fingerprints, and photographs of identity documents. The application transmits this data to U.S. Customs and Border Protection (CBP) for submission to government biometric matching systems. Those systems use AI-based matching techniques, including facial recognition and fingerprint matching, to compare the captured data against existing records and return possible matches with associated biographic information. The tool also uses optical character recognition to extract text from identity documents to support additional checks. ICE does not own or interact directly with the AI models that perform biometric matching or optical character recognition. CBP owns and operates these models, and Mobile Fortify simply displays the results to ICE users. For additional details on the AI models that support the application, see CBP’s Mobile Fortify AI use case. | ICE does not own and did not train, test, or evaluate the AI models that power the Mobile Fortify application. See CBP’s Mobile Fortify AI use case for details on the application’s underlying AI models. | Yes | Yes | b) In-progress | In-Progress - potential impacts will be identified during AI Impact Assessment. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Development of monitoring protocols is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | ||||||
| Department Of Homeland Security | ICE | DHS-2666 | License Plate Capture and Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The AI is intended to solve the problem of time-consuming manual reviews of license plate images and data, which makes it challenging for investigators to identify relevant vehicle movements and patterns. | The AI capabilities reduce the need for manual review of large numbers of license plate images and logs. By streamlining plate reading and providing flexible search and summarization tools, the system helps investigators more quickly identify potentially relevant vehicle movements and patterns that might otherwise be missed, thereby improving the efficiency and effectiveness of investigative work. | The system processes images and metadata from ICE-owned and commercial license plate recognition cameras. It uses computer vision and optical character recognition to detect and read license plates and to capture associated information such as time, location, vehicle make and model, color, and visible characteristics like damage or signage. An integrated natural language interface powered by a large language model allows users to ask questions in everyday language, such as requesting detections of a particular plate or vehicle description over a period of time. The system converts these questions into structured database queries and returns relevant records, along with concise text summaries of vehicle movements. The system’s AI-enabled outputs are machine-read license plate numbers with associated time, location, and vehicle metadata, as well as natural language search results and summaries produced by the LLM interface. The LLM translates user questions into structured searches over the LPR data and summarizes relevant vehicle detections into concise descriptions of vehicles and their sightings. While license plate information can be used as a link to other personally identifiable information, the LPR system does not automatically link license plate records to driver or vehicle registration databases. Any such queries must be conducted separately in accordance with applicable laws and policies. | 19/09/2025 | a) Purchased from a vendor | Motorola | No | The system processes images and metadata from ICE-owned and commercial license plate recognition cameras. It uses computer vision and optical character recognition to detect and read license plates and to capture associated information such as time, location, vehicle make and model, color, and visible characteristics like damage or signage. An integrated natural language interface powered by a large language model allows users to ask questions in everyday language, such as requesting detections of a particular plate or vehicle description over a period of time. The system converts these questions into structured database queries and returns relevant records, along with concise text summaries of vehicle movements. The system’s AI-enabled outputs are machine-read license plate numbers with associated time, location, and vehicle metadata, as well as natural language search results and summaries produced by the LLM interface. The LLM translates user questions into structured searches over the LPR data and summarizes relevant vehicle detections into concise descriptions of vehicles and their sightings. While license plate information can be used as a link to other personally identifiable information, the LPR system does not automatically link license plate records to driver or vehicle registration databases. Any such queries must be conducted separately in accordance with applicable laws and policies. | The vendor trained its LPR system using a combination of real-world traffic camera footage, synthetic plate images, and public datasets containing diverse license plate formats from various regions. The models are optimized for high accuracy in different lighting, weather, and motion conditions, and are fine-tuned using data from deployments across cities and agencies. | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-lpr-january2018.pdf | No | b) In-progress | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-lpr-january2018.pdf | In-Progress - potential impacts will be identified during AI Impact Assessment. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Development of monitoring protocols is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | ||||
| Department Of Homeland Security | ICE | DHS-362 | Facial Recognition for Investigations of Child Sexual Exploitation and Abuse | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This use case intends to solve the problem of identifying unknown victims and offenders depicted in child sexual abuse material. | The tool helps more quickly identify previously unknown victims and offenders who might not be discovered through manual investigative methods alone. By highlighting potentially relevant photographs or profiles across publicly available online images, it can accelerate victim identification and rescue efforts and support the disruption and prosecution of offenders who might otherwise remain undetected. | Homeland Security Investigations Child Exploitation Investigations Unit personnel submit newly discovered and unidentified child sexual abuse material images obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | 01/12/2020 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Homeland Security Investigations Child Exploitation Investigations Unit personnel submit newly discovered and unidentified child sexual abuse material images obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | TSA | DHS-135 | Low Probability of False Alarm (Low-Pfa) Algorithm for on-person screening. | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | a) High-impact | High-impact | Computer Vision | Increase passenger throughput by improving detection performance and decreasing alarm rates and passenger touch rates by 50%. | The purpose is to reduce alarm rates while providing increased passenger throughput and experience. Utilizes Machine Learning (ML) to improve detection performance while decreasing alarm rates and passenger touch rates. The algorithm is gender-agnostic, so officers no longer need to select a passenger's gender prior to scanning. Advanced imaging technology (AIT) throughput and AIT utilization have increased with this new algorithm. Note: Once the algorithm is trained, it is locked down and no longer learning. | The AI outputs target coordinates to the operator viewing station, which are displayed as a bounding box on a representative human figure. | 12/12/2022 | a) Purchased from a vendor | Leidos, Rohde & Schwarz | No | The AI outputs target coordinates to the operator viewing station, which are displayed as a bounding box on a representative human figure. | Vendor AITs are tested in a laboratory environment using mock passengers. Statistical tests are performed for probability of false alarm and probability of detection, performance measures for detection capability. | No | https://www.dhs.gov/sites/default/files/publications/privacy-tsa-pia-32-d-ait.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-tsa-pia-32-d-ait.pdf | Security risks include false negatives allowing threats to get through to the sterile side of the airport, or high false alarm rates slowing operations. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | TSA | DHS-327 | Credential Authentication Technology with Camera System (CAT-2) and AutoCAT (CAT-2 in an e-gate form factor) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Improving the detection of imposters. | The Transportation Security Administration (TSA) uses AI-based, one-to-one (1:1) and one-to-many (1:n) facial matching technologies at some checkpoints to assist human reviewers with traveler identity verification. The purpose and expected benefits of the technology include increased speed and accuracy of identity verification at the checkpoint while improving detection of imposters. | The system produces a recommendation to the Transportation Security Officer (TSO) to indicate whether the person presenting the identity document is similar to the face on the photo ID document. In the event of a non-match, the TSO is responsible for additional identity verification steps to verify the identity of the traveler. | 01/09/2023 | a) Purchased from a vendor | IDEMIA Identity, Security USA LLC | Yes | The system produces a recommendation to the Transportation Security Officer (TSO) to indicate whether the person presenting the identity document is similar to the face on the photo ID document. In the event of a non-match, the TSO is responsible for additional identity verification steps to verify the identity of the traveler. | During the development, the original equipment manufacturer trained the technology using their own data for 1:1 facial comparison. Prior to initial deployment, DHS S&T conducted an evaluation of the biometrics algorithms using volunteers for facial matching validation. During TSA's continuous evaluation, a photo is taken of the passenger and compared to the photo on the identification to determine whether it was an actual match to the individual. 
| Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-tsa046b-tdc-june2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-tsa046b-tdc-june2020.pdf | In the event of a non-match, the traveler may make a second attempt or the TSA may perform additional identity verification steps to verify the identity of the traveler. This process may add between 20 seconds and a few minutes to the identity verification and security screening process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | Direct usability testing | ||||
| Department Of Homeland Security | TSA | DHS-345 | PreCheck Touchless Identity Solution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Assist human reviewers with traveler identity verification. | TSA is using Facial Comparison to verify a passenger’s identity at its security checkpoint locations using the CBP Traveler Verification Service (TVS). This process streamlines passenger identity verification, increasing the speed of security checks while maintaining a high degree of safety for all passengers and crewmembers. | TSA is leveraging CBP's TVS system technology as an optional process for passengers traveling via certain airports who wish to further expedite their TSA PreCheck or crew member ID verification process. This additional TSA PreCheck feature is voluntary, and passengers may opt-out of the process at any time and instead choose the standard identity verification by a Transportation Security Officer (TSO). Crew members that wish to opt-out will be sent to the security checkpoint to process through screening. | 01/10/2018 | c) Developed with both contracting and in-house resources | CBP TVS, NEC Algorithm | Yes | TSA is leveraging CBP's TVS system technology as an optional process for passengers traveling via certain airports who wish to further expedite their TSA PreCheck or crew member ID verification process. This additional TSA PreCheck feature is voluntary, and passengers may opt-out of the process at any time and instead choose the standard identity verification by a Transportation Security Officer (TSO). Crew members that wish to opt-out will be sent to the security checkpoint to process through screening. | Data includes images captured during prior CBP inspections, U.S. passport and visa records, immigration records, and photographs from DHS encounters. 
TSA evaluates the matching score through a quality assurance process to compare the “ground truth” data from the passenger identification information against the determination made by the algorithm. The passenger information captured during quality assurance is not retained by TSA. | Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | The key risk is that TVS verification performance may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | USCIS | DHS-130 | Text Analytics Data Science Sentence Similarity Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The Text Analytics capability employs machine learning and data graphing techniques to identify patterns that may indicate potential fraud, national security, and/or public safety concerns by scanning the digitized narrative sections of the associated applications and looking for common language patterns. | Text Analytics augments the tedious and time-consuming manual process to identify potential fraud, national security, and/or public safety concerns and enables the identification of such concerns across jurisdictional boundaries. It increases the integrity of immigration programs, strengthens officers’ confidence in their work, and contributes to the reduction in customer wait times. | Text Analytics does not make predictions, recommendations, or decisions. It is merely a research tool that identifies potential patterns, while remaining agnostic as to whether those patterns identify potential fraud, national security, and/or public safety concerns. Instead, trained staff evaluate the patterns to determine whether they identify potential concerns and then validate and/or invalidate those potential concerns through the course of their investigations or adjudications. | 01/11/2019 | c) Developed with both contracting and in-house resources | Inadev | Yes | Text Analytics does not make predictions, recommendations, or decisions. It is merely a research tool that identifies potential patterns, while remaining agnostic as to whether those patterns identify potential fraud, national security, and/or public safety concerns. 
Instead, trained staff evaluate the patterns to determine whether they identify potential concerns and then validate and/or invalidate those potential concerns through the course of their investigations or adjudications. | Text Analytics stores information extracted from benefit forms and supporting documents, focusing on the narrative portions of those documents. | Yes | https://www.dhs.gov/publication/dhsuscispia-085-pangaea-pangaea-text | Yes | a) Yes | https://www.dhs.gov/publication/dhsuscispia-085-pangaea-pangaea-text | There is a small risk of false positives or false negatives due to the model. The risk is mitigated through a manual review of any information produced from the tool. Text Analytics is a decision support tool. Text Analytics does not make recommendations of fraud or benefit / adjudication decisions; any decisions made from information stored in the tool are conducted through a manual review by a USCIS employee. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | Direct usability testing | ||||
| Department Of Homeland Security | USCIS | DHS-181 | Automated Realtime Global Organization Specialist (ARGOS) for Company Registration Submissions to E-Verify | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | The goal of the use case is to leverage sentiment analysis in ARGOS to streamline the process and accelerate work for the individual who is researching a company that has submitted its information for registration to E-Verify. | ARGOS sentiment analysis produces a risk score and keyword extraction identifies the keyword category of interest to the VAC MPAs (management and program analysts) for the aggregated open-source information to help quickly identify any pertinent information to aid the MPAs in their open-source investigation of company applications. This saves potentially thousands of MPA man hours in open-source investigation and creates a single source-of-truth for each MPA's investigation of a company application. This, in turn, allows for quicker application processing and, if risk of company fraud exists, much faster referral processing time, quickening the next-step referral to FDNS for further investigations. | Responses back to a user dashboard accessible internally only by VAC Management and Program Analyst (MPA) personnel. Keywords relating to the MPA's work interest are extracted if present and risk scores are assigned to the open-source collected information. The data is presented to the MPA on the GUI (graphical user interface) dashboard. | 12/08/2023 | c) Developed with both contracting and in-house resources | IBM | Yes | Responses back to a user dashboard accessible internally only by VAC Management and Program Analyst (MPA) personnel. 
Keywords relating to the MPA's work interest are extracted if present and risk scores are assigned to the open-source collected information. The data is presented to the MPA on the GUI (graphical user interface) dashboard. | The fine-tuned dataset is collected from open-source queries from the Bing API connected to the ARGOS system. This is publicly available data that doesn't contain any PII. | No | Yes | a) Yes | Lack of Domain-Specific Accuracy: testing the model on company data across different industries resulted in inconsistent performance. Limited Generalization to Unseen Data: the model’s performance on validation datasets was lower than on training data, indicating potential overfitting. Misinterpretation of Sentiment: instances of sarcasm/irony were not recognized. All risks identified in testing and evaluation phases. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public | ||||||
| Department Of Homeland Security | USCIS | DHS-2384 | Verification Match Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | By consolidating these into a single, unified Verification Match Model within a separate microservice, the use case aims to improve the accuracy of responses and reduce the need for manual review. ML plays a key role in the continuous improvement of these models, ultimately reducing the need for manual case reviews. | Leveraging AI in the USCIS verification matching process of known records across systems is beneficial because it streamlines existing USCIS review by 1) improving associated system accuracy, 2) reducing human error by automating person-and-record match scoring, and 3) matching at a higher volume than traditional tools or manual processes can capably achieve. | A recommendation and score that indicates person-and-record match probability, used by verification systems (E-Verify and SAVE) to improve accuracy in the initial system response | 22/05/2024 | c) Developed with both contracting and in-house resources | IBM | Yes | A recommendation and score that indicates person-and-record match probability, used by verification systems (E-Verify and SAVE) to improve accuracy in the initial system response | Individual's Names, Dates of Birth, and Document Identifiers from USCIS sourced data contained in CIS2, C3, ELIS, and Global. These are all private datasets within USCIS. 
| Yes | https://www.dhs.gov/publication/dhsuscispia-030f-e-verify-mobile-app-usability-testing, https://www.dhs.gov/publication/systematic-alien-verification-entitlements-save-program | Yes | a) Yes | https://www.dhs.gov/publication/dhsuscispia-030f-e-verify-mobile-app-usability-testing, https://www.dhs.gov/publication/systematic-alien-verification-entitlements-save-program | Model-match performance in terms of accuracy, precision, and recall. Identified via model evaluation and analysis for these performance statistics. | d) In-progress | a) Yes, sufficient monitoring protocols have been established | Establishment of sufficient and periodic training is in-progress | a) Yes | a) Yes, an appropriate appeal process has been established | In-progress | ||||
| Department Of Homeland Security | USCIS | DHS-413 | I-765 - USCIS Facial Recognition through IDENT (1:1 Face Recognition/Validation) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Using the Automated Biometric Identification System (IDENT) makes this process nearly instant and greatly enhances processing efficiency without compromising the effectiveness of current identity verification methods. Additionally, new rules created by the FBI’s Compact Council require biometric verification to perform fingerprint resubmissions for different filing reasons than the original fingerprint capture. Facial verification brings USCIS into compliance with this rule change. Furthermore, performing facial verification increases the integrity of information and identities by identifying conflicts early on, preventing issues from becoming pervasive across immigration systems. | This will allow the user to complete the biometric verification requirement without having to attend an appointment at an Applicant Support Center. This reduces the burden on the beneficiary as well as reducing demands on USCIS Applicant Service Center resources. | Match or no match response from IDENT. | 12/11/2024 | c) Developed with both contracting and in-house resources | Pluribus Digital | Yes | Match or no match response from IDENT. | The Office of Biometric Identity Management (OBIM) conducts manual testing and evaluation of its fingerprint, latent print, iris, and facial comparison algorithms. This process relies on carefully curated datasets, expert human analysis, and mathematical assessment. 
| Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | Potential mismatch of face images and/or bias based on demographic data held by USCIS | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | General solicitations of feedback and comments from the public, Other | ||||
| Department Of Homeland Security | USCIS | DHS-55 | Person-Centric Identity Services Deduplication Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Critical to the success of PCIS is the entity resolution and de-duplication of individual records from various systems of records to create a complete picture of a person. Using machine learning (ML), the model can identify which case management records belong to the same unique individual with a high degree of confidence. This allows PCIS to compile a full immigration history for an individual without the need for time-consuming research across multiple disparate systems. The de-duplication model plays a critical role in the entity resolution and surfacing of a person and all their associated records. The ML models are more resilient to fuzzy matches and handle varying data fill rates more reliably. | Using Machine Learning allows us to improve entity resolution as compared to rule-based systems. PCIS offers the ability to see a person's immigration history organized in one place. Specific benefits do or will include: an organized summary view of the identity with the individual's latest photo from PCIS; full immigration history including receipts associated with the applicant, regardless of case management system; mailing, physical, and safe history of the individual organized in reverse chronological order, allowing users to easily find the most recent address; and all identifiers associated with the applicant, including A-Numbers, FINs, SSNs, ELIS account numbers, passport numbers, etc. | Numerical likelihood score which is used to determine if the record belongs to the individual. Likelihood scores are subjected to a high threshold (.98, maximum 1) to assess whether the record belongs to the individual. 
| 01/02/2023 | c) Developed with both contracting and in-house resources | MetroIBR | Yes | Numerical likelihood score which is used to determine if the record belongs to the individual. Likelihood scores are subjected to a high threshold (.98, maximum 1) to assess whether the record belongs to the individual. | USCIS-only data derived from 7 form-processing source systems including C3, ELIS, CPMS, GLOBAL, CIS2, AR-11, CAMINO. | Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | There is a small risk of false positives or negatives, which are identified and sent to the Manual Resolution Queue. The queue is processed by authorized and trained personnel. Human review is still done for the actual benefit or request being sought. AI is used to identify the person seeking the benefit or request. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | USSS | DHS-415 | Criminal Investigations (OBIM) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The intended problem to solve is the identification of unknown victims and suspects involved in crimes that undermine the integrity of U.S. financial and payment systems. By using facial recognition and biometric image comparison, the USSS aims to efficiently and accurately identify individuals connected to criminal investigations, thereby supporting law enforcement efforts to detect, arrest, and prevent such crimes. | The intended purpose of this AI is to allow USSS personnel to submit available photographs or video stills of these unknown persons as probe images (facial images or templates searched against the gallery of an FRS) to other government agencies for comparison against their image galleries. The agencies will query their image galleries of known persons and may provide lists of potential matches. They may use the potential matches to produce investigative leads which will assist in the further identification of victims or suspects. Additionally, we may request another government agency to conduct a one-to-one comparison of two photographs or video stills for investigative use. | The system will query image galleries of known persons and may provide lists of potential matches. USSS personnel may use the potential matches to produce investigative leads which will assist in the further identification of victims or suspects. | 01/01/2017 | a) Purchased from a vendor | NEC | Yes | The system will query image galleries of known persons and may provide lists of potential matches. USSS personnel may use the potential matches to produce investigative leads which will assist in the further identification of victims or suspects. | Trained on mugshot data and data from paid volunteers. 
| Yes | https://www.dhs.gov/sites/default/files/2024-09/24_0912_privacy-pia-usss033-facialrecognition_0.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2024-09/24_0912_privacy-pia-usss033-facialrecognition_0.pdf | The product was developed by NEC using AI and Deep Machine Learning to train the algorithm; however, the current NEC product that is used by OBIM in the production environment does not use AI to continue to train the NEC algorithm on production data. The fact that OBIM/NEC do not use AI on production data to continue to train the algorithm significantly limits the risks associated with the use of AI and ML on the face candidate list process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-101 | The Advanced Trade Analytics Platform (ATAP) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. Additionally, the AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Classical/Predictive Machine Learning | ATAP aims to provide insights into trends and behaviors in trade activity to support a proactive risk management and enforcement posture in the agency’s mission execution. | To create efficiencies and unlock key insights in CBP's trade mission execution through the application of data analytics, machine learning, and AI. | Model output is provided in dashboards and other visualization mechanisms for operator assessment and action determination. | 07/03/2022 | c) Developed with both contracting and in-house resources | Elder Research Inc, DevTech Systems Inc., Guidehouse | Yes | Model output is provided in dashboards and other visualization mechanisms for operator assessment and action determination. | ATAP relies on CBP source system information from CBP's ACE, ATS, and SEACATS systems, including import/export filing information, compliance reviews, targeting, seizure, and fine/penalty information. | Yes | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-183 | Public Information Compilation for Travel Threat Analysis (Dataminr) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. The AI output only provides the officer with compiled public information. The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Natural Language Processing (NLP) | Provides situational awareness of open-source social media and news reporting to enhance CBP Screening, Vetting and security of the homeland. | This tool significantly reduces the amount of time it takes for users to collect and compile commercially available open-source information when attempting to identify possible threats related to national security, border violence, CBP facilities, CBP employee safety and other topics with a CBP-nexus involving air, sea, and land travel to and/or from the U.S. | The AI output is compiled publicly available information for awareness. CBP employees further research the information, including reading the source information, to determine if there is a possible threat. | 01/11/2024 | a) Purchased from a vendor | Dataminr | Yes | The AI output is compiled publicly available information for awareness. 
CBP employees further research the information, including reading the source information, to determine if there is a possible threat. | Training data was collected from several publicly available, social media, and media outlet sites. This approach ensured the model was trained across several different groups representing an array of possible language types and vernaculars so as not to cause bias toward a specific demographic. Along with the above open-source data, the vendor leverages a mix of proprietary data to ensure the data is representative of real-world conditions and context. | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | |||||||||||
| Department Of Homeland Security | CBP | DHS-188 | Airship Outpost for Conveyance Identification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI is not being used for tracking or analysis. It is simply identifying the alpha-numeric values of the conveyances in front of it. | Computer Vision | CBP must efficiently and accurately identify and document cross-border conveyances (aircraft, vessels, automobiles). | Outpost uses machine learning to identify the type of conveyance in front of the sensor camera and uses this information to determine where to capture the conveyance's identification (license plate, hull number, tail number, etc.). Conveyance identifiers exist in different locations on different conveyance types. By identifying the type of conveyance, the system knows where to focus to capture mission-relevant information. | Identification and classification of the type of conveyance (e.g., automobile, aircraft, watercraft) including license plates, hull numbers, or tail numbers for monitoring purposes. | 01/09/2023 | a) Purchased from a vendor | Airship | Yes | Identification and classification of the type of conveyance (e.g., automobile, aircraft, watercraft) including license plates, hull numbers, or tail numbers for monitoring purposes. | The datasets that the system uses are GOTS and LES. Purchased commercial data sources are also used to enhance the value of the system. The AI only identifies the type of conveyance captured by the camera and determines the location of the alphanumeric identifiers used to identify it, such as license plates, hull numbers, or tail numbers. This information, along with an image of the conveyance and the date/time, is sent back. 
| Yes | https://www.dhs.gov/sites/default/files/2022-05/privacy-pia-cbp-tecs%20platform-april2022.pdf | Yes | https://www.dhs.gov/sites/default/files/2022-05/privacy-pia-cbp-tecs%20platform-april2022.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-2383 | Unmanned Aircraft Collision Avoidance | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case avoids collisions for small unmanned aircraft systems. It operates on a video feed that activates the obstacle avoidance features. The obstacle avoidance capability assists the pilot on the ground to avoid colliding the unmanned aircraft with objects such as man-made structures, vehicles, trees, wires, or other objects in the projected flight path. The pilot receives a visual alert on the hand controller, indicating a possible collision and in some cases the aircraft will slow down, change direction to avoid the obstacle, or stop. | Computer Vision | The use case solves the problem of navigating complex environments autonomously while ensuring obstacle avoidance in real time. By relying on AI-based 3D scanning functions instead of GPS, the system enhances safety and precision in drone operations, reducing the risk of collisions and enabling efficient, reliable use in diverse mission scenarios. It addresses the challenge of maintaining situational awareness and operational accuracy during unmanned aircraft missions, providing pilots with visual alerts to prevent potential collisions. | The platform operates on video feed only which in turn activates the obstacle avoidance on the aircraft where the AI capabilities are housed. | The pilot of the sUAS will receive a visual alert on the hand controller, indicating a possible collision. | 01/10/2022 | a) Purchased from a vendor | Skydio and Xtender | Yes | The pilot of the sUAS will receive a visual alert on the hand controller, indicating a possible collision. 
| Live flight testing data of the platform in test and operational environments. | No | No | |||||||||||||
| Department Of Homeland Security | CBP | DHS-24 | Entity Resolution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Classical/Predictive Machine Learning | The use case leverages advanced technology to improve the analysis of global trade data and enhance its ability to identify risks within supply chains. By using AI tools to organize and analyze complex datasets, CBP can uncover patterns and relationships that may indicate unethical practices, such as forced labor. This innovative approach supports efforts to ensure compliance with trade laws, protect economic security, and promote fair and ethical trade practices. | Detection research of forced labor within supply chains utilizing an analytical AI platform. | Detection of potential forced labor within the supply chain. | 01/05/2023 | c) Developed with both contracting and in-house resources | Altana | No | Detection of potential forced labor within the supply chain. | Altana utilizes a combination of commercial, public, and proprietary data sources to build a searchable and traversable graph of global trade. These include bills of lading, customs declarations, and exclusive proprietary documentation from first-party logistics providers. 
| No | https://www.dhs.gov/sites/default/files/2023-09/23_0926_privacy-pia-cbp003c-acemodernizations.pdf | No | https://www.dhs.gov/sites/default/files/2023-09/23_0926_privacy-pia-cbp003c-acemodernizations.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-2417 | Process Efficiency Traveler Identity for Airline Check-in and Bag Drop | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This is an option for travelers, not a requirement. If a traveler chooses to use it and the service cannot match a traveler, the traveler may continue check-in/bag drop via another means. CBP does not make any decision or action based on a no-match. | Computer Vision | These use cases facilitate identity verification leveraging TVS. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP, External Partners, and Other Government Agencies (OGA) to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/10/2018 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Border Crossing Information. | Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-310 | Customs Broker License Exam - Proctor Support | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This is not a high-impact case because this feature would only be used for examinees taking the Customs Broker License Exam and not in requests for federal services, processes, and benefits, including loans and access to public housing. This is simply a feature that the CBP exam vendor uses to detect cheating during the exam. | Computer Vision | Detect potential cheating during the Customs Broker License Exam. | The model supports remote proctoring of the exam and helps ensure the integrity of the testing process by confirming the exam is conducted under secure conditions, preventing cheating or fraud, and verifying the identity of exam takers to confirm they meet the necessary requirements. | Integrity reports, identity confirmation, proctoring compliance feedback, and test results. | 31/03/2023 | c) Developed with both contracting and in-house resources | PDRI | No | Integrity reports, identity confirmation, proctoring compliance feedback, and test results. | PDRI trains its AI models for assessments by combining expert human ratings with robust data, using seasoned raters to score responses first, then training AI on these expert-validated examples, and continuously testing the AI's outputs against human judgments to ensure accuracy, fairness, and adherence to psychological testing standards. | Yes | https://www.dhs.gov/sites/default/files/2023-04/privacy-pia-cbp077-bmp-march2023.pdf.pdf | No | https://www.dhs.gov/sites/default/files/2023-04/privacy-pia-cbp077-bmp-march2023.pdf.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-35 | Autonomous Surveillance Tower | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI provides alerts when it detects the presence of an IoI (i.e., persons, vehicles, animals) in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. This is not a biometric system and does not identify or track specific individuals. | Classical/Predictive Machine Learning | CBP's limited manpower constrains its ability to manually monitor all areas of the border at all times. To account for this limitation, surveillance and sensor systems assist in monitoring the border. | The AST machine-learning-assisted system augments the U.S. Border Patrol by enhancing the capabilities of individual users carrying out the domain awareness mission. The expected benefit is the ability of a single person to monitor an area magnitudes greater than could be covered with conventional CCTV or human surveillance. 
The ultimate outcome for the agency and the public is greater availability of the agents to solve and address more complex tasks and allow for better strategic/tactical deployment of existing resources and personnel. | The AI provides alerts when it detects the presence of an IoI (i.e., persons, vehicles, animals) in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. This is not a biometric system and does not identify or track specific individuals. | 01/01/2020 | a) Purchased from a vendor | Anduril | Yes | The AI provides alerts when it detects the presence of an IoI (i.e., persons, vehicles, animals) in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. This is not a biometric system and does not identify or track specific individuals. | Meta-data and data created by CBP. Generally comprised of agent adjudications of autonomous sensory inputs of items of interest by the system. 
| No | https://cbpgov.sharepoint.com/:b:/r/sites/AutonomousSurveillanceTowerASTProgram/Shared%20Documents/General/Tech,%20Cyber,%20Eng%20docs,%20Anduril%20docs,%20Data%20sheets/AST%20CyberSec%20-%20ATOs,%20PTAs,%20ATTs/PTAs/Disposition%20PTA,%20CBP%20-%20Autonomous%20Surveillance%20Towers%20(AST),%2020230906,%20PRIV%20Final.pdf?csf=1&web=1&e=Q9OWDC | Yes | https://cbpgov.sharepoint.com/:b:/r/sites/AutonomousSurveillanceTowerASTProgram/Shared%20Documents/General/Tech,%20Cyber,%20Eng%20docs,%20Anduril%20docs,%20Data%20sheets/AST%20CyberSec%20-%20ATOs,%20PTAs,%20ATTs/PTAs/Disposition%20PTA,%20CBP%20-%20Autonomous%20Surveillance%20Towers%20(AST),%2020230906,%20PRIV%20Final.pdf?csf=1&web=1&e=Q9OWDC | |||||||||||
| Department Of Homeland Security | CBP | DHS-37 | Automated Item of Interest Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI runs on video and images captured from lawfully deployed technologies used to support the U.S. Border Patrol mission between Ports of Entry. The AI provides alerts when it detects the presence of an IoI, such as persons, vehicles, or animals, in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. | Classical/Predictive Machine Learning | The software is designed to analyze photographs and video feeds captured by field imaging equipment for review by U.S. Border Patrol (USBP) agents and personnel. Using proprietary software, the system processes and annotates images to identify whether they contain human subjects, animals, or vehicles. The system is designed to incorporate future enhancements that expand its detection capabilities and to improve accuracy based on user feedback. 
| The software analyzes images and video that are taken by operationally deployed equipment, which are then fed into CBP systems for review by USBP agents, Office of Field Operations (OFO) officers, and other CBP users. It provides quick identification of people either crossing into the U.S. at a time and place other than designated for entry, circumventing security at a port of entry, or those already inside the U.S. trying to elude capture, as well as the ability for human operators to quickly determine if subjects in an image are, in fact, human. | The system creates a layer overlaid on the image that draws a box around items of interest it has determined to be likely human beings. | 31/01/2020 | c) Developed with both contracting and in-house resources | Matroid | Yes | The system creates a layer overlaid on the image that draws a box around items of interest it has determined to be likely human beings. | All of the image data fed to the models are owned by USBP. | No | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-38 | Vessel Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The computer vision application scans a large grid within the viewshed of the high-definition camera. It then attempts to detect small vessels entering and exiting small waterways in and around the area where the camera is deployed. The system is calibrated to identify inbound and outbound traffic. When it detects a vessel, it sends an image to the cloud for user review and a determination of whether any follow-up action is required. The AI output does not serve as a principal basis for decisions or actions related to the definition of High-Impact AI. | Computer Vision | The system uses AI-enhanced technologies and analytics to improve maritime detection and tracking in areas with significant trade and recreational water vessel activity. The system increases situational awareness and responsiveness to potential threats, assisting human operators in identifying and classifying vessels of interest. Agents can define a search area with specific criteria, which is transmitted to sensors. Detected images are analyzed by AI algorithms that filter, detect, and categorize objects into Items of Interest (IoI) or other objects. IoIs are shared across detection systems and tracked seamlessly across multiple sensors, while non-relevant objects are excluded. This approach enhances efficiency in detecting and addressing IoIs, particularly during high-traffic periods, by providing alerts and tracking information to human operators. | Current surveillance technology does not have machine-assisted classification of targets on screen, making it harder for human operators to distinguish legitimate from illegitimate traffic during times of high volumes of legitimate traffic. 
The project intends to support human operators in identifying and classifying potential illicit vessels. Benefits would include more efficient detection and resolution of IoIs, especially during times of high-volume traffic. | Alerts and tracks of detected Items of Interest (IoIs) delivered to human operator workstations. | 28/07/2025 | c) Developed with both contracting and in-house resources | JHU APL | No | Alerts and tracks of detected Items of Interest (IoIs) delivered to human operator workstations. | CBP images of open waterways for vessels. | No | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-401 | Vault Access Log (SPVAA) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case uses facial recognition technology in Seized Property Vault Activity Automation (SPVAA) to create a log of access to a seized property vault. Photos of the CBP personnel accessing the vault are loaded into the application, which logs the entrance request, the case number associated with the entrance request, and the individual’s access to the vault. | Computer Vision | Automate identification of personnel entering a secure seized property vault. | The system enhances monitoring and minimizes the risk of unauthorized access, contributing to stronger security protocols for handling seized property. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/08/2022 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Border Crossing Information | Yes | https://www.dhs.gov/collections/privacy-impact-assessments-pia | Yes | https://www.dhs.gov/collections/privacy-impact-assessments-pia | |||||||||||
| Department Of Homeland Security | CBP | DHS-65 | Aircraft Landing Location Predictor (KESTREL) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI predicts aircraft landing locations. The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Classical/Predictive Machine Learning | Monitoring activities in the air and maritime domains to identify unusual patterns or behaviors. | The system analyzes potential landing locations for aircraft to support response planning and preparation. | The output provides a visual representation of the top three potential locations, using color-coded indicators to show the likelihood of each outcome. | 01/10/2022 | a) Purchased from a vendor | Maxar | Yes | The output provides a visual representation of the top three potential locations, using color-coded indicators to show the likelihood of each outcome. | Data is provided from a surveillance system in the form of real-time messages on detected tracks within the system. | Yes | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-81 | Passport Anomaly Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI output is an assessment of passport validity in response to an officer's request for passport validation. This result could be related to an inconsistency or abnormality in a passport's pattern. This is a tool available to CBP officers for confirming the validity of a passport. This result is used to notify the CBP officer that a passport may require review, as it may be part of a newly released sequence, may be invalid, or even possibly fraudulent. This is only one piece of information provided to CBP Officers during the normal course of their duties. The officers would use any results provided to research the validity of the passport through other sources. | Classical/Predictive Machine Learning | The Passport Anomaly Model addresses challenges stemming from the lack of formal notification regarding updates to passport series, such as issuance of new series or expiration of old ones. By analyzing historical trends, the model evaluates whether a passport exhibits typical or atypical characteristics and alerts officers when further scrutiny may be warranted. This capability enhances the integrity of travel document verification by enabling CBP personnel to conduct thorough and efficient reviews, ensuring security and accuracy in the inspection process. 
| The model assists CBP personnel in passenger targeting and vetting by analyzing anomalies in Electronic System for Travel Authorization (ESTA) and non-ESTA country-specific traveler passports to improve the accuracy of matching and streamline the screening process by reducing errors and enhancing the identification of high-risk travelers. | The model’s outputs are integrated into the Automated Targeting System (ATS) application, delivering real-time results that assist CBP personnel in detecting passport anomalies and potentially fraudulent documents. | 01/07/2017 | b) Developed in-house | ManTech | Yes | The model’s outputs are integrated into the Automated Targeting System (ATS) application, delivering real-time results that assist CBP personnel in detecting passport anomalies and potentially fraudulent documents. | This model leverages data provided by air carriers within the Advance Passenger Information System (APIS) and Electronic System for Travel Authorization (ESTA). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Age | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | ||||||||||
| Department Of Homeland Security | CBP | DHS-86 | Agriculture Commodity Model (AGC) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. The use case identifies agricultural pest risk in cargo shipments entering the United States. If a cargo shipment is identified as high-risk for pest infestation, it is prioritized for inspection. The model does not track or identify individuals. | Classical/Predictive Machine Learning | The AI/ML model aids CBP personnel in accurately and efficiently identifying cargo shipments at risk for agricultural pests in compliance with APTL's agricultural monitoring program. Due to the excessive volume and velocity of inbound cargo shipments, CBP personnel cannot possibly evaluate every shipment for risk. AI/ML models assist in performing a greater depth of risk assessment across all inbound cargo shipments to identify the most likely ones that would require additional attention, analysis, and possible examination. | The AGC Model uses data analytics and risk indicators to prioritize inspections and allocate resources effectively. This proactive approach helps protect the U.S. food supply and agricultural economy while facilitating legitimate trade. | CBP agriculture specialists use the AGC Model's risk assessment outputs, such as risk scores, to prioritize further screening of cargo shipments. | 01/07/2022 | b) Developed in-house | Yes | CBP agriculture specialists use the AGC Model's risk assessment outputs, such as risk scores, to prioritize further screening of cargo shipments. 
| This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-95 | Trade Entity Risk Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The Trade Entity Risk model/tool enhances cargo predictive threat models by providing a comprehensive risk profile that aggregates historical trade entity transactions, trading partner relationships, reviews, examinations, and violations (within CBP data holdings) to create quantifiable risk measures for all trade entities. The AI model serves as an input to larger AI/ML cargo risk targeting models to better assess cargo threats and inform focus areas for trade targeting. Its outputs are not directly shared with users or operators and the output does not serve as a principal basis for decision or actions. | Classical/Predictive Machine Learning | The need to continuously assess and identify trade entity risk to help better assess cargo threats. | The Trade Entity Risk model enhances existing predictive threat models by compiling a risk profile that includes historical transaction data, relationships with trading partners, and relevant compliance information. This aggregated data helps create measurable risk indicators for trade entities. | The calculated risk measures produced by the Trade Entity Risk model can be integrated into broader AI and machine learning systems to improve the evaluation of cargo-related threats. This output supports the standardization of trade entity risk, facilitating better data development for future predictive models. | 15/07/2025 | b) Developed in-house | ManTech | Yes | The calculated risk measures produced by the Trade Entity Risk model can be integrated into broader AI and machine learning systems to improve the evaluation of cargo-related threats. 
This output supports the standardization of trade entity risk, facilitating better data development for future predictive models. | This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-125 | Investigative Prioritization Aggregator | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides more efficient data processing for HSI personnel. The AI output may be used to produce investigative insights in the form of data, information leads or connections that HSI personnel can use to inform investigations, but the output itself is data preparation and organization so HSI personnel can produce those leads when combining the AI output with the personnel’s expertise and other relevant investigative data and information. Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Classical/Predictive Machine Learning | This use case intends to solve the problem of overwhelming data volumes that make it difficult for HSI personnel to prioritize high-value targets in criminal investigations. 
| The sheer volume of data associated with investigations often overwhelms human capabilities, making it challenging for HSI personnel to analyze evidence and identify key players in criminal networks. Currently, there is no effective mechanism to quantify the level of evidence related to a particular subject or entity, or to determine which actors within a network are the most influential. This is particularly critical in the context of the counter-opioid/fentanyl mission, where timely and accurate intelligence is essential. To address this challenge, this project utilizes machine learning to assign point values to data, enabling the scoring of information associated with a given selector, such as a phone number or legal name. This scoring system helps to understand the importance of an entity to investigations and the potential consequences of removing or neutralizing that entity. By doing so, HSI personnel can focus on high-priority targets and associated criminal networks, ultimately enhancing their ability to disrupt and dismantle these threats. | The output is scored entity data (such as a phone number or legal name). This scoring system helps to understand the importance of an entity to investigations and the potential consequences of removing or neutralizing that entity. | 01/02/2024 | c) Developed with both contracting and in-house resources | Sandia National Laboratories | Yes | The output is scored entity data (such as a phone number or legal name). This scoring system helps to understand the importance of an entity to investigations and the potential consequences of removing or neutralizing that entity. | Law Enforcement Sensitive (LES) investigative data. | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-2427 | Translation and Transcription for Investigative Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides more efficient data processing for HSI personnel. The AI output may be used to produce investigative insights in the form of data, information leads or connections that HSI personnel can use to inform investigations, but the output itself is data preparation and organization so HSI personnel can produce those leads when combining the AI output with the personnel’s expertise and other relevant investigative data and information. Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Natural Language Processing (NLP) | This use case intends to solve the problem of the time-consuming process of translating and transcribing data for investigative purposes. | HSI investigators often encounter data from various sources, including legal and administrative processes, enforcement actions, and open-source materials, in languages other than English. 
To unlock the value of this data, it must be translated into English before further analysis can be conducted. The Translation and Transcription Service leverages neural machine translation (NMT) models for text translation and automatic speech recognition (ASR) and deep neural network (DNN) models with normalization for voice-to-text transcription. This innovative approach enables users to quickly triage large datasets and identify key information relevant to investigations. Any data deemed critical for court proceedings is then submitted to certified human translators for final review, ensuring that government resources are allocated efficiently and only used for necessary translations and transcriptions. | The Translation and Transcription Service leverages neural machine translation (NMT) models for text translation and automatic speech recognition (ASR) and deep neural network (DNN) models with normalization for voice-to-text transcription. | 01/02/2024 | c) Developed with both contracting and in-house resources | Booz Allen | Yes | The Translation and Transcription Service leverages neural machine translation (NMT) models for text translation and automatic speech recognition (ASR) and deep neural network (DNN) models with normalization for voice-to-text transcription. | AI models within the use case are not trained on agency data. Open-source models (i) Whisper is trained on 680K hours of multilingual and multitask supervised data collected from the web, (ii) No Language Left Behind (NLLB) is trained on a combination of publicly available datasets (additional information available in Section 5 of Meta’s NLLB whitepaper: https://research.facebook.com/file/585831413174038/No-Language-Left-Behind--Scaling-Human-Centered-Machine-Translation.pdf). 
| Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-2511 | AI-Assisted Audio/Video Redaction | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI-Assisted Audio/Video Redaction use case does not meet the definition of a High-Impact category under OMB M-25-21 due to its narrowly defined scope, reliance on human oversight, and non-decision-making role. This tool is designed to assist Homeland Security Investigations (HSI) by partially automating the redaction of audio and video evidence, such as detecting and obscuring faces, objects, or sensitive information (e.g., license plates, PII), to protect individuals’ identities during investigations and legal proceedings. Importantly, the tool does not perform 1:1 facial matching or identification, and its outputs are strictly limited to redactions, which are subject to comprehensive human review and editing before finalization. This ensures that the AI’s role is supportive rather than determinative, with no direct impact on investigative decisions or legal outcomes. | Computer Vision | This use case intends to solve the problem of the labor-intensive process of redacting audio and video evidence. | The AI-Assisted Audio/Video Redaction tool is used to reduce the manual effort required to redact audio and video evidence used during an investigation and subsequent legal proceedings. | The AI outputs in this use case are redactions to the media file. Users will further edit the redacted media prior to exporting the final redacted file to ensure completeness. Homeland Security Investigations conducts a human review of each frame within redacted files prior to distribution. | 01/07/2024 | a) Purchased from a vendor | Case Guard | No | The AI outputs in this use case are redactions to the media file. 
Users will further edit the redacted media prior to exporting the final redacted file to ensure completeness. Homeland Security Investigations conducts a human review of each frame within redacted files prior to distribution. | All training sets used for the model are from public and private collections of images. AI models are trained using a combination of real-world and synthetic datasets collected from publicly available sources. These datasets are curated to represent a broad range of conditions, including edge cases such as occlusions and poor lighting, to improve detection accuracy across varied scenarios. | Yes | https://www.dhs.gov/sites/default/files/2024-03/24_0307_priv_pia-ice-066a-pia-update.pdf | No | https://www.dhs.gov/sites/default/files/2024-03/24_0307_priv_pia-ice-066a-pia-update.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-2517 | Dark Web Threat Intelligence for Cyber Investigations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides a more efficient way to process, review, summarize, translate, and analyze dark web data for use in HSI investigations. The AI output may be used to produce investigative insights, but the output itself is supporting information for HSI personnel to use for ease of review, for further analysis, and to produce leads. Insights derived from translated data allow investigators to identify the most relevant data that, if deemed critical for court proceedings, can be submitted to certified human translators for final review. Personnel may use investigative insights for law enforcement actions such as identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for any law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Generative AI | This use case intends to solve the problem of quickly identifying and summarizing relevant cyber threat data from the dark web, which can be difficult and time-consuming for analysts. | The summarized information helps analysts quickly identify threat actors, trends, and illicit platforms, enabling them to prioritize their investigative efforts. 
The data analysis and extraction techniques connect related information across data holdings and generate metadata to help analysts review and search results. The translation capability helps analysts identify non-English data responsive to an investigation and saves time otherwise spent translating non-responsive data. | The system’s AI outputs are concise summaries of search results, English translations of non‑English data, and metadata that highlights potential connections and leads. These outputs assist analysts in quickly identifying key findings while retaining access to the original data for deeper analysis and verification. All outputs are part of a broader investigative process and are not used as the sole basis for enforcement actions. | 01/08/2024 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | No | The system’s AI outputs are concise summaries of search results, English translations of non‑English data, and metadata that highlights potential connections and leads. These outputs assist analysts in quickly identifying key findings while retaining access to the original data for deeper analysis and verification. All outputs are part of a broader investigative process and are not used as the sole basis for enforcement actions. | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2547 | Entity Resolution for Global Trade Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Although the use case falls into a presumed high-impact category related to law enforcement investigations, it does not meet the high-impact definition as its outputs do not directly serve as a principal basis for enforcement actions or regulatory decisions. Instead, the AI-generated knowledge graph provides a foundation for further human-led investigation, requiring source validation through separate processes. This delineation ensures that the AI output remains a supportive rather than a determinative factor, disqualifying it from the high-impact AI system category. | Classical/Predictive Machine Learning | The AI is intended to solve the problem of investigators having to manually piece together fragmented global trade and supply chain data from many sources, which makes it difficult to see relationships among entities and identify potential leads in transnational criminal investigations. | This platform improves Homeland Security Investigations' ability to validate existing information, understand complex supply chain networks, and generate leads in transnational criminal investigations. | The platform uses AI Machine Learning (ML) models for data collection, data structuring, entity resolution, network analysis, and risk assessment. These ML processes contribute to the platform’s output, a dynamic knowledge graph and user-friendly interface for global supply chain research. | 10/10/2024 | a) Purchased from a vendor | Altana | No | The platform uses AI Machine Learning (ML) models for data collection, data structuring, entity resolution, network analysis, and risk assessment. 
These ML processes contribute to the platform’s output, a dynamic knowledge graph and user-friendly interface for global supply chain research. | The platform’s machine learning models were trained and evaluated by the vendor using its own datasets, which are derived from public and commercially sourced trade and logistics records (such as customs declarations, bills of lading, and shipment data from air, rail, and sea carriers). Homeland Security Investigations (HSI) does not provide any ICE or HSI investigatory data to the vendor to develop, train, test, or operate the platform models. | No | No | |||||||||||||
| Department Of Homeland Security | ICE | DHS-2575 | Blockchain Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Despite falling into a presumed category of high-impact AI, this use case does not meet the definition because the AI’s outputs serve primarily as inputs to an investigative process rather than making legally binding or material decisions itself. HSI investigators review, validate, and contextualize the AI-generated outputs before integrating them into any official case management system. Enforcement decisions and outcomes arise from a full investigative process that includes judicial review and other safeguards, thereby distancing the AI outputs from direct impact on civil liberties or privacy. | Classical/Predictive Machine Learning | By utilizing this AI-powered blockchain analysis platform, investigators can uncover hidden connections across blockchain networks, detect illicit activities, and significantly reduce the time required for manual analysis, enhancing HSI’s ability to combat transnational crime effectively. | The use of AI within TRM Labs improves HSI’s ability to uncover hidden connections across blockchain ecosystems, detect illicit behaviors, and reduce the time required for manual analysis. | The platform’s outputs include confidence scores for address attributions, risk flags based on behavioral typologies, identification of hidden connections across blockchain ecosystems, and plain-language summaries of smart contracts. | 16/08/2022 | a) Purchased from a vendor | TRM Labs | No | The platform’s outputs include confidence scores for address attributions, risk flags based on behavioral typologies, identification of hidden connections across blockchain ecosystems, and plain-language summaries of smart contracts. 
| The platform uses vendor AI models trained and tested on public blockchain ledger data (public/external), as well as proprietary data, internal attribution and scoring data, behavioral data, network data, and synthetic data. | No | No | |||||||||||||
| Department Of Homeland Security | ICE | DHS-2578 | Enhanced Lead Identification and Targeting | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | While ELITE provides actionable data to ERO officers, its outputs are limited to normalized address data and do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effects on individuals. ERO officers review and validate the AI-driven outputs before determining actions, ensuring human oversight and additional verification steps. Furthermore, enforcement decisions are based on the full investigative process, which includes human analysis and validation of the source of AI outputs. As such, the AI system's role is limited to data extraction and normalization, rather than serving as a primary basis for enforcement actions. | Generative AI | The AI is intended to solve the problem of unstructured, hard‑to‑read address information in records like rap sheets and warrants, which makes it difficult and time‑consuming for Enforcement and Removal Operations officers to extract accurate addresses and build usable enforcement leads. | The integration of AI enhances data extraction capabilities and decreases the time spent on manual data normalization tasks. This provides Enforcement and Removal Operations officers with higher-quality leads and enables them to make better-informed decisions. | The outputs of Enhanced Leads Identification & Targeting for Enforcement (ELITE) are enriched leads that include AI-extracted addresses. Enforcement and Removal Operations officers review these leads to determine which are actionable and then share actionable leads across offices and areas of responsibility to coordinate enforcement operations. 
| 07/06/2025 | a) Purchased from a vendor | Palantir | Yes | The outputs of Enhanced Leads Identification & Targeting for Enforcement (ELITE) are enriched leads that include AI-extracted addresses. Enforcement and Removal Operations officers review these leads to determine which are actionable and then share actionable leads across offices and areas of responsibility to coordinate enforcement operations. | The system uses commercially available large language models trained on public domain data by their providers. The use of LLMs is limited to address extraction from criminal records such as rap sheets and warrants. ICE data was not used during the design, development, or training phases of the AI models. During operation, the AI models interact with ICE production data from multiple sources, including data from ICE’s Enforcement Integrated Database (EID). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-eid-may2019.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-eid-may2019.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-407 | Biometric Check-in for ATD-ISAP (SmartLINK) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Facial verification is only one option for check-in. If the remote check-in fails, either because it was unable to verify the match between the user and their previously taken photo, or because of other potential issues (poor lighting, camera/phone malfunction, etc.), an officer will manually review the check-in photo against the previously taken photos. If that fails, the user can schedule an in-person check-in at their local ERO office. Therefore, the output of AI (facial verification for a remote check-in) is not the primary basis for a decision or action that would affect the individual's rights or safety. It is a convenience to help save time for both the user and officers. | Computer Vision | This use case intends to solve the problem of the need for frequent in-person check-ins for participants in the ATD-ISAP program. | The ISAP Biometric Monitoring App is a technology option that allows participants to report in using a smartphone. This app verifies a participant’s identity, determines their location, and quickly collects status change information. The app adds functionality not available with telephonic reporting and is less intrusive than a GPS unit. The ISAP monitoring app limits the in-person interactions of routine check-ins, allowing more time to be allocated to non-compliant participants, complex removal proceedings cases, and docket management. | There are two outputs related to using the ISAP Biometric Monitoring App. Either a participant “passes” (biometric match) or the photo is moved to a “pending review” status. In either scenario, a human can evaluate the response. 
| 01/02/2018 | a) Purchased from a vendor | BI | Yes | There are two outputs related to using the ISAP Biometric Monitoring App. Either a participant “passes” (biometric match) or the photo is moved to a “pending review” status. In either scenario, a human can evaluate the response. | The training process includes datasets from diverse facial images and real-world environments. The images are preprocessed to normalize variables like lighting and facial expressions, making them suitable for facial matching. Data augmentation techniques, such as rotation and scaling, are also applied to alleviate the need for additional data collection. The models are trained to extract facial features and to match them accurately. | Yes | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | No | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | |||||||||||
| Department Of Homeland Security | MGMT | DHS-2434 | User and Entity Behavior Analytics (UEBA) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The CVAS sub-system inside the VIEW system of record is not a high-impact use case because the AI's output from CVAS does not actually "serve as a principal basis for" the relevant type of agency action or decision. The collated information supports decisions about staffing priorities but is not used to make security-related or employment-related decisions. The use case also does not perform workplace monitoring or surveillance. | Classical/Predictive Machine Learning | The Continuous Vetting Analytics Service (CVAS) sub-system in the Vetting Identities for an Enterprise Workforce (VIEW) system of record will solely be used to aggregate information provided by or authorized to be collected by a DHS applicant and then present the aggregated information in a structured manner to personnel security adjudicative specialists to help them prioritize workload and improve review processes. | Enhanced Threat Detection: Identify patterns in user behaviors that deviate from normal baselines, signaling potential insider threats or security risks. Continuous Risk Assessment: Move from static vetting to a continuous vetting process by monitoring real-time activities and interactions with secure information. Improved Incident Response: Enable rapid responses to high-risk behaviors, escalating alerts for timely intervention by security personnel. | Behavioral Baseline Modeling: Develop a baseline for each individual based on regular access patterns, network usage, and interactions with classified data or secure areas. 
Anomaly Detection: Employ machine learning models to detect deviations from established baselines, such as unusual access times, atypical access to high-sensitivity resources, or excessive data downloads. Risk Scoring: Assign a risk score to each user based on observed anomalies, factoring in historical behavior, job role, and access level, allowing security teams to prioritize investigations. Automated Alerts & Reporting: Generate automated alerts for high-risk behaviors or patterns of concern and deliver timely reports to personnel security teams for further investigation. | 08/07/2023 | a) Purchased from a vendor | CANDA Solutions | Yes | Behavioral Baseline Modeling: Develop a baseline for each individual based on regular access patterns, network usage, and interactions with classified data or secure areas. Anomaly Detection: Employ machine learning models to detect deviations from established baselines, such as unusual access times, atypical access to high-sensitivity resources, or excessive data downloads. Risk Scoring: Assign a risk score to each user based on observed anomalies, factoring in historical behavior, job role, and access level, allowing security teams to prioritize investigations. Automated Alerts & Reporting: Generate automated alerts for high-risk behaviors or patterns of concern and deliver timely reports to personnel security teams for further investigation. | Personally Identifiable Information (PII), Sensitive Personally Identifiable Information (SPII), Clearance and Background Investigation Data: Data from security clearances, background investigations, and adjudication records. | Yes | Yes | |||||||||||||
| Department Of Homeland Security | USCIS | DHS-14 | Biometrics Enrollment Tool (BET) Fingerprint Quality Check | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. This tool simply identifies whether a fingerprint collected is of sufficient quality to pass the FBI fingerprint check process, ultimately maximizing the number of successful FBI submissions while minimizing the number of fingerprint recaptures necessary. This quality assurance step is one task in a series of adjudication activities but is not determinative of the overall adjudication decision. The tool saves personnel time and resources while enhancing customer experience by helping to ensure that only quality fingerprints are passed forward for matching against the FBI Identity History Summary Check. Results of FBI fingerprint checks are subsequently reviewed by a human as part of the immigration adjudicative process. | Classical/Predictive Machine Learning | This effort aims to maximize the number of successful FBI submissions while minimizing the number of fingerprint recaptures necessary. The output is a Numerical Fingerprint Quality score, which is compared against fingerprint quality thresholds (per finger and per set of fingerprints) to align with FBI specifications. | BET assists in determining if the fingerprint taken is of sufficient quality to pass the FBI fingerprint check process. 
It provides immediate feedback when a set of prints is likely to be rejected by the FBI by incorporating machine learning models into the BET application. The FBI will not disclose their quality grading criteria for fingerprints, leaving BET with the responsibility of determining quality to prevent unnecessary secondary encounters with applicants. | Numerical Fingerprint Quality score, which is compared against fingerprint quality thresholds (per finger and per set of fingerprints) to align with FBI specifications | 01/01/2024 | c) Developed with both contracting and in-house resources | Pluribus Digital | Yes | Numerical Fingerprint Quality score, which is compared against fingerprint quality thresholds (per finger and per set of fingerprints) to align with FBI specifications | Internal data from BET data capture into Databricks lakehouse, numerical values representing fingerprint quality scores determined by the BET system outside of the AI workflow. | Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | |||||||||||
| Department Of Homeland Security | USCIS | DHS-180 | Automated Name and Date of Birth (DOB) Harvesting Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The use case improves case processing efficiency by reducing the amount of time USCIS staff must spend to manually find aliases and dates of birth (DOBs) in existing records of an individual. The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The use case increases efficiency of tasks associated with accurate and timely identification, analysis, and review of biographical information needed for adjudication. The AI outputs are suggested aliases and DOBs related to the individual query, which USCIS staff must review to accept, reject, or ignore the suggested information. The AI outputs reduce the amount of adjudicative time spent manually harvesting aliases and DOBs. The use case increases efficiency of tasks associated with reviewing existing records for adjudicating requests for immigration benefits. Completing such adjudications is not dependent on the use case; however, lack of this tool would significantly increase human processing times and potentially reduce the accuracy of information consulted during the human review process. | Classical/Predictive Machine Learning | Adjudicators spend a significant amount of time manually harvesting aliases and dates of birth (DOBs) from the identity history summary (IdHS) report attached to the ELIS case as part of the Manual Name Harvesting Task during case processing. 
| To reduce the amount of adjudicative time spent manually harvesting aliases and dates of birth (DOBs) from the identity history summary (IdHS) report attached to the ELIS case as part of the Manual Name Harvesting Task during case processing. | Suggested Names and DOBs from IdHS record. | 27/06/2022 | c) Developed with both contracting and in-house resources | SAIC and DV United | Yes | Suggested Names and DOBs from IdHS record. | Training and evaluation for ANH was performed using a large set of previously annotated IdHS records (raw text) in a secure environment separate from our standard development environment. The system uses Spark NLP and DistilBERT embeddings for the model input, so no raw text from these files is stored or accessible from the final model or associated logged artifacts. Annotations were sourced from results of previously completed manual name harvesting tasks. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | |||||||||||
| Department Of Homeland Security | USCIS | DHS-189 | ELIS Card Photo Validation Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case supports beneficiaries submitting an e-filed I-765 via myUSCIS to apply for employment authorization. These applications include a digital ID photo of the applicant, which will be printed on their Employment Authorization Document (EAD) card if the application is accepted. The use case determines whether a user-uploaded ID photo is suitable for use on an EAD card, and notifies the submitter if it detects a potential quality issue with the photo. (SEE DHS CAIO SUPER MEMO FY24) --- The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI outputs are a real-time advisory message to the submitter of the photo that it is of insufficient quality for card production, at which point the submitter may choose to resubmit or not. While adjudication of EADs is not dependent on this use case, this tool enhances efficiency and customer service by minimizing the number of cards failing in production and delays related to subsequent requests for information. (SEE USCIS REQUEST FOR DHS CAIO DETERMINATION FY24) | Computer Vision | The card photo validation solution was developed to validate each submitted photo against a set of business-defined requirements in near-real-time in order to eliminate rejections/RFEs for user-uploaded photos. | Ensuring beneficiary-uploaded photos meet USCIS requirements. 
This helps ensure photos are correct before making ID cards, saving adjudicator time and avoiding delays. | Response back to the user based on the pre-defined quality checks if the uploaded photo meets USCIS requirements. Users still have the option to ignore the warnings and upload the photo. | 15/03/2022 | c) Developed with both contracting and in-house resources | SAIC and DV United | Yes | Response back to the user based on the pre-defined quality checks if the uploaded photo meets USCIS requirements. Users still have the option to ignore the warnings and upload the photo. | Initial face detection is performed by pretrained dlib embeddings as implemented in OpenCV. Validation tests requiring object detection (headwear and eyeglasses) are performed using custom fine-tuning from Detectron2, an open-source object detection model. Training and testing data for these object detection problems was sourced from a combination of public domain face detection datasets and USCIS production data passport photos. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | |||||||||||
| Department Of Homeland Security | USCIS | DHS-2543 | AI Security and Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case collates information provided by or authorized to be collected by an employee as part of the network security process. The collated information supports decisions about staffing priorities but is not used to make security-related or employment-related decisions. | Generative AI | Organizations adopting AI face a number of risks from data leaks, shadow AI, and unsecured outputs. | The gateway will secure enterprise AI use by discovering shadow AI, enforcing guardrails, and preventing data spills. Its benefits include improved compliance, reduced risk, real-time protection, and enhanced visibility. Organizations are empowered to securely and efficiently adopt AI. | The AI system outputs risk alerts, compliance reports, and audit logs. It predicts threats, recommends policy adjustments, and semi-autonomously enforces guardrails. It secures data and AI usage via blocking, masking, and context-aware decisions. | 31/03/2025 | a) Purchased from a vendor | Lasso | Yes | The AI system outputs risk alerts, compliance reports, and audit logs. It predicts threats, recommends policy adjustments, and semi-autonomously enforces guardrails. It secures data and AI usage via blocking, masking, and context-aware decisions. | Test data formatted to simulate PII/SPII/etc. to ensure that the solution properly detects such content. | Yes | Yes | |||||||||||||
| Department Of Homeland Security | USCIS | DHS-372 | User Entity and Behavior Analytics (UEBA) for Security Operations (SecOps) Anomaly Identification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case collates information provided by or authorized to be collected by an employee as part of the network security process. The collated information supports decisions about staffing priorities but is not used to make security-related or employment-related decisions. | Classical/Predictive Machine Learning | User Entity and Behavior Analytics (UEBA) assists USCIS Security Operations (SecOps) in identifying behavioral anomalies that most likely indicate malicious intent or heightened risk associated with user identities and endpoint hosts accessing the USCIS network. The analytics provide risk scoring, which helps USCIS SecOps to prioritize the highest-risk incidents first. | UEBA's purpose is to review USCIS system logs to determine when an entity is performing actions that are anomalous. An entity can be classified as a workstation, server, or an internal USCIS system account. The UEBA ingests logs from systems to perform analytics based on models that are manually created and maintained. UEBA uses the models to apply a risk score to the entity; the risk score is then used to create a case (or ticket) for Security Operations analyst review. The AI reviews the action of the analyst to adjust the risk scoring for future events. Output would assist in prioritizing cyber events for further manual investigation. | Output of the Machine Learning is an alert with all artifacts for the SOC to investigate. The alert is used as a recommendation to prioritize specific investigations in the SOC ticket queue. 
| 17/08/2024 | c) Developed with both contracting and in-house resources | Gurucul | Yes | Output of the Machine Learning is an alert with all artifacts for the SOC to investigate. The alert is used as a recommendation to prioritize specific investigations in the SOC ticket queue. | Data used to tune models is USCIS internal system logs. | No | Yes | |||||||||||||
| Department Of Homeland Security | USCIS | DHS-56 | Person-Centric Identity Services Information Compilation Check | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case verifies the ongoing accuracy of information compiled from within the Person-Centric Identity Services (PCIS). It identifies which records from within PCIS best match search criteria to support case processing. ------ The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The use case compiles records from across a variety of USCIS systems to provide a comprehensive history of a person’s interaction with USCIS. The output of this can be visualized through a report or dashboard to assist with case review, ensuring access to helpful and accurate records. Adjudicators review the outputs of this use case, alongside other information and insights, to process a case and make a final determination. The adjudication process can be conducted without this tool; however, doing so would significantly increase the time and effort required to process immigration requests. | Classical/Predictive Machine Learning | The output of the use case is the numerical confidence score which is used to determine the validity of the A-number presented in search results. The confidence score identifies which records from within PCIS best match search criteria for an A-number. 
| The aim of this use case is to leverage machine learning to test the accuracy of PCIS in identifying and managing associations between individuals and their assigned A-numbers. An A-number is a unique 7-, 8-, or 9-digit number assigned to a noncitizen by DHS and plays a critical role in surfacing a person and all of their associated records from across PCIS. | Numerical likelihood score used to determine the validity of the A# presented. Likelihood scores are subject to a high threshold (0.98, out of a maximum of 1) to assess whether the A# presented belongs to the individual. | 01/07/2022 | c) Developed with both contracting and in-house resources | MetroIBR | Yes | Numerical likelihood score used to determine the validity of the A# presented. Likelihood scores are subject to a high threshold (0.98, out of a maximum of 1) to assess whether the A# presented belongs to the individual. | USCIS-only data derived from 7 form-processing source systems, including C3, ELIS, CPMS, GLOBAL, CIS2, AR-11, and CAMINO. | Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-2366 | CBP Careers Bot - Leo | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Natural Language Processing (NLP) chatbot is intended to help visitors to the CBP careers site navigate complex and extensive career resources quickly and easily. By providing a guided, interactive experience, the chatbot simplifies access to the most relevant career information, on a user-by-user basis. Additionally, the chatbot can direct users to CBP recruiters and recruitment events, strengthening the agency's recruitment network and fostering more direct engagement with prospective candidates. | Visitors to the U.S. Customs and Border Protection (CBP) careers website can engage with a Natural Language Processing (NLP) based chatbot to access CBP career-related information and be driven to take the next action, such as contacting a recruiter, attending a career event, or applying for a CBP career. These data-driven responses will allow for more natural, conversational interactions to increase usability and accuracy in provided information. | The NLP chatbot will provide natural language responses to user queries. | 30/09/2025 | c) Developed with both contracting and in-house resources | Salesforce Einstein | Yes | The NLP chatbot will provide natural language responses to user queries. | User input is categorized and captured in Salesforce to refine the chatbot's interpretation of future inputs. | No | No | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-2373 | CBP Employee Experience | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Parsing through voluminous amounts of qualitative and quantitative data from recruit/applicant/employee survey data. The technology provides actionable intelligence for senior leaders to better improve the recruit/applicant/employee experience, thereby increasing both yield rates and resiliency. | CBP Employee Experience is intended to ingest, interpret, and operationalize employee experience data originating from survey results and operational data to deliver real time insights related to the experience of USBP recruits, applicants, and employees. These metrics inform HRM leadership of opportunities for process improvement in order to meet congressionally mandated hiring targets and retain a qualified workforce. | Real time insights related to the experience of USBP recruits, applicants, and employees. | 01/10/2023 | c) Developed with both contracting and in-house resources | Medallia | Yes | Real time insights related to the experience of USBP recruits, applicants, and employees. | The platform uses a supervised machine learning model trained on baseline non-governmental data, which is regularly updated and tested for accuracy. CBP can further train the model by correcting sentiment tags, allowing the system to learn from feedback through both hard and soft rules. | No | No | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-2451 | Position Description Generation and Evaluation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | Federal agencies often face challenges in creating accurate, consistent, and well-structured position descriptions (PDs) due to limited resources, the time-intensive nature of the classification process, and varying levels of expertise among HR staff and Hiring Managers. Inaccurate or poorly written PDs can lead to misclassification, legal disputes, grievances, and difficulties attracting qualified candidates, ultimately impacting workforce quality and agency performance. | ClassifAI is designed to streamline and enhance the position description (PD) creation and classification process by leveraging generative AI to produce drafts of accurate, consistent, and standards-compliant PDs. By reducing administrative burdens, improving PD quality, and minimizing classification risks, ClassifAI enables agencies to optimize workforce management, attract top talent, and achieve greater operational efficiency with fewer resources. | ClassifAI generates accurate, standards-compliant drafts of position descriptions (PDs) with tailored classification recommendations, robust and customizable language, and supporting documentation. | 16/05/2025 | c) Developed with both contracting and in-house resources | Starlo and Deloitte | No | ClassifAI generates accurate, standards-compliant drafts of position descriptions (PDs) with tailored classification recommendations, robust and customizable language, and supporting documentation. | Publicly available position descriptions (primarily from the DoD), OPM standards and guidelines (e.g., the OPM classifier’s handbook), and CBP position descriptions. | No | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-2529 | Global Entry Mobile App Traveler AI Question Answering Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Travelers submitting a question to the Global Entry support team. | Faster workflow, faster customer response time, reduce the number of people it takes to address customer concerns, less expensive. | The output is the answer to a traveler's question. The AI model is directed to answer the question from the context information we give it, verbatim. | 16/06/2025 | b) Developed in-house | Yes | The output is the answer to a traveler's question. The AI model is directed to answer the question from the context information we give it, verbatim. | Previous production traveler questions that were sanitized and anonymized, mock traveler questions. | Yes | Yes | |||||||||||||||
| Department Of Homeland Security | CBP | DHS-2530 | ChatCBP | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Workforce enablement and efficiency. | Improving Operational Efficiency: Automating information retrieval, reducing manual review time, and streamlining workflows will lead to significant time savings and increased productivity for CBP personnel. This translates to cost savings and allows agents to focus on higher-priority tasks. Enhancing Decision-Making: Providing quick and accurate access to relevant information will improve the quality and consistency of decisions across the agency. Increasing Mission Effectiveness: By applying LLM capabilities to critical use cases like hot list review and violation coding, we can enhance accuracy, reduce errors, and improve overall mission success rates. | Generative LLM that will allow users to upload, search, delete, and summarize documents; conduct advanced searches; identify similar language in documents; and receive natural language-style outputs in response to their prompts. | 30/07/2025 | b) Developed in-house | No | Generative LLM that will allow users to upload, search, delete, and summarize documents; conduct advanced searches; identify similar language in documents; and receive natural language-style outputs in response to their prompts. | Internal CBP document samples are used to test the efficacy and accuracy of performance of chatCBP. | Yes | Yes | |||||||||||||||
| Department Of Homeland Security | CBP | DHS-2704 | GenAI for Document Summarization and Content Generation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Workforce enablement and efficiency through the implementation of Generative AI / Large Language Models (LLM) | Improving Operational Efficiency: Automating information retrieval, reducing manual review time, and streamlining workflows will lead to significant time savings and increased productivity for CBP personnel. This translates to cost savings and allows agents to focus on higher-priority tasks. Enhancing Decision-Making: Providing quick and accurate access to relevant information will improve the quality and consistency of decisions across the agency. | Generative LLM applications deployed in a standalone capacity or embedded in existing systems that will allow users to upload, search, and summarize documents; conduct advanced searches; identify similar language in documents; and receive natural language-style outputs in response to their prompts. | 01/03/2025 | c) Developed with both contracting and in-house resources | Meta, OpenAI, Google, Anthropic | No | Generative LLM applications deployed in a standalone capacity or embedded in existing systems that will allow users to upload, search, and summarize documents; conduct advanced searches; identify similar language in documents; and receive natural language-style outputs in response to their prompts. | The commercial LLMs used for this use case were trained using a diverse range of publicly available data, including text from books, articles, websites, and other sources and data types. | No | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-399 | Cyber Threat Analysis (Recorded Future) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The system uses the Recorded Future platform to streamline CBP Cyber Threat Intelligence (CTI) operations by automating the identification and analysis of relevant cyber threat activity. It leverages AI/ML to transform unstructured text into structured data using natural language processing, classify events and entities to prioritize threats, forecast events through predictive modeling, and represent structured knowledge using ontologies. It enables analysts to rapidly assess vulnerabilities in CBP’s IT environment, identify adversary data, and generate cyber risk assessments, providing actionable intelligence to enhance efficiency and support Security Operations and Cyber Risk Management investigations. | Cyber Threat Analysis quickly populates query results when searching against adversary tactics, techniques, and procedures, establishing a threat scorecard. This service can also provide cyber risk scorecards for third-party vendors, companies, and organizations. | Actionable intelligence supporting Security Operations Center (SOC) and Cyber Risk Management (CRM) investigations and reports. | 20/06/2024 | a) Purchased from a vendor | Recorded Future | No | Actionable intelligence supporting Security Operations Center (SOC) and Cyber Risk Management (CRM) investigations and reports. | Recorded Future AI is trained on over 10 years of threat analysis from Insikt Group, the company’s threat research division, and is combined with the insights of the Recorded Future Intelligence Graph. | No | No | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-68 | Empty Container Detection Model (Cargo Insights Team) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This enhances border security and optimizes resource allocation for inspections. The model is designed to accurately identify and track empty containers in cargo shipments, preventing errors and fraud in cargo declarations. | The AI improves accuracy, enhances efficiency by prioritizing legitimate containers for inspection, and strengthens security by detecting potential smuggling risks. | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | 24/06/2023 | b) Developed in-house | Yes | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | X-ray images and associated metadata. | No | Yes | |||||||||||||||
| Department Of Homeland Security | CBP | DHS-69 | Commodity Detection Model (Cargo Insights Team) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Computer Vision | This enhances border security and optimizes resource allocation for inspections. Analyze X-Ray images and predict the commodity code, reducing the need for users to manually enter commodity codes. | This project leverages computer vision with object detection and a neural network to analyze X-Ray images and predict the commodity code. | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | 21/01/2025 | b) Developed in-house | Yes | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | X-ray images and associated metadata. | No | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | |||||||||||||
| Department Of Homeland Security | CBP | DHS-94 | Cargo Classification Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The need to improve data classification, enable better machine learning integration, and facilitate nuanced trade entity risk assessments. By categorizing goods based on their descriptions and characteristics, these classifiers help identify potential threats associated with specific cargo types associated with prior violations. | CBP’s Cargo Classification Tool improves trade compliance and enhances cargo risk assessment by streamlining the classification of goods, enabling better integration with machine learning systems, and refining entity risk evaluations. It identifies potential threats linked to specific cargo types and prior violations by categorizing goods based on their descriptions and attributes. These improvements contribute to faster, more accurate classification and risk-based targeting, which strengthens security and facilitates trade. | The Cargo Classification Tool produces outputs that map cargo commodity descriptions to their most probable tariff codes, enhancing classification accuracy. These outputs integrate seamlessly into broader threat-specific risk models, providing features to support predictive risk assessments in cargo security. | 01/10/2020 | b) Developed in-house | Yes | The Cargo Classification Tool produces outputs that map cargo commodity descriptions to their most probable tariff codes, enhancing classification accuracy. These outputs integrate seamlessly into broader threat-specific risk models, providing features to support predictive risk assessments in cargo security. 
| This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | |||||||||||||
| Department Of Homeland Security | CISA | DHS-106 | Critical Infrastructure Network Anomaly Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | CyberSentry currently ingests hundreds of terabytes of data from Critical Infrastructure Partners every single day. At petabyte-scale over all collected network data, CyberSentry required a way to filter through the noise to be able to detect Advanced Persistent Threat (APT) and Nation State malicious activity happening within our Partners' networks. CyberSentry has developed numerous machine learning-based detections to identify trends, patterns, and anomalies in network data that ultimately result in both automated and manual triage by analysts. | This use case delivers improved internal government tools for hunting and detection of malicious threat actors on critical infrastructure networks. It automates manual data fusion and correlation processes and highlights potential anomalies, allowing CISA analysts to focus more time on hunting adversaries. | An interface is provided for analysts to query cybersecurity data, and dashboards are provided with potential cybersecurity alerts, including anomalies detected through predictive models and rule-based heuristics. | 10/01/2022 | b) Developed in-house | Yes | An interface is provided for analysts to query cybersecurity data, and dashboards are provided with potential cybersecurity alerts, including anomalies detected through predictive models and rule-based heuristics. | Cybersecurity cloud, network and host logs; Cybersecurity threat intelligence (CTI) | No | https://www.dhs.gov/publication/dhscisapia-037-cybersentry | Yes | https://www.dhs.gov/publication/dhscisapia-037-cybersentry | |||||||||||||
| Department Of Homeland Security | CISA | DHS-2306 | CISAChat | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Currently, retrieving and synthesizing content from hundreds of government documents is a slow, manual process. CISA users need the capability to answer questions, generate summaries, and produce textual responses efficiently using a broad range of information sources—from CISA publications to DHS mandates. AI can streamline and accelerate this workflow by automating information extraction and response generation. | Currently, multiple CISA program offices are using contractor staff to review pre-production content and other internal materials to develop summaries, key themes, and improve clarity. Leveraging a Generative AI solution improves internal agency Customer Experience (CX) and saves staff time. | LLM generated response to the questions posed on uploaded content. | 06/05/2025 | a) Purchased from a vendor | Microsoft | No | LLM generated response to the questions posed on uploaded content. | Current data used is pre-publication content that has already been approved. | No | Yes | ||||||||||||||
| Department Of Homeland Security | CISA | DHS-4 | Automated Detection of Personally Identifiable Information (PII) in Cybersecurity Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To enhance privacy, this AI tool uses Natural Language Processing (NLP) to automatically flag potential PII for review and removal by CISA analysts. | Automated PII Detection and Review Process uses analytics to identify and manage potential PII in submissions. If PII is flagged, the submission is sent to CISA analysts, who are guided by AI to review and confirm or reject the detection, redacting information if necessary. Privacy experts monitor the system and provide feedback. The system learns from this feedback, ensuring compliance with privacy regulations and improving efficiency by reducing false positives. Regular audits ensure the process remains trustworthy and effective. | The system sends flagged data rows with potential PII to humans for review. | 01/12/2020 | c) Developed with both contracting and in-house resources | Nightwing Intelligence Solutions, LLC. Procurement Instrument ID affiliated with this use case: 70QS0124C00000002 | Yes | The system sends flagged data rows with potential PII to humans for review. | Cybersecurity indicators of compromise (IOCs), Cybersecurity threat intelligence (CTI) | Yes | https://www.dhs.gov/publication/dhsnppdpia-029-automated-indicator-sharing | Yes | https://www.dhs.gov/publication/dhsnppdpia-029-automated-indicator-sharing | ||||||||||||
| Department Of Homeland Security | FEMA | DHS-2296 | OCFO Response Augmentation Suite | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | FEMA Office of the Chief Financial Officer Generative Pre-trained Transformer (OCFO GPT), Travel Policy GPT and Fiscal Policy GPT are internal Generative AI (GenAI) tools designed to support the FEMA workforce by generating initial responses to various queries. These tools leverage relevant public and internal documents to draft preliminary responses, which are then refined prior to formal submission. FEMA OCFO GPT generates initial responses to questions for the record, leveraging public and internal documents, and provides a preliminary response to the Program Office to use in their formal response to the request. It reduces the data gathering stage, saving analysts 80% of the initial effort. Travel Policy GPT generates initial responses to questions regarding FEMA/DHS Travel Policy, including the JTR, and provides a preliminary response to the travel specialist to use in their formal response to the queries. It improves response times, saving users 80-90% of the time compared to regular engagement with the Travel Service Center. Fiscal Policy GPT provides preliminary responses to questions regarding FEMA/DHS Fiscal Policy and will generate a draft response with references to assist FEMA internal workforce in compliance with established policy. It saves users 80-90% of the time compared to regular engagement with DHS /FEMA OCFO policy and speeds up resolution times. | FEMA OCFO GPT-B is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions for the record and providing a preliminary response to the Program Office to use in their formal response to the request. 
FEMA OCFO GPT provides draft responses reducing the data gathering stage and providing additional time for analysis, response, and approval. This has reduced the analyst initial level of effort versus individual research by 80% on initial surveys. Travel Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions regarding FEMA/DHS Travel Policy, and providing a preliminary response to the travel specialist to use in their formal response to the queries. The tools provide improved responses saving the end user time versus regular engagement with the Travel Service Center. The tool also allows travelers to ask specific questions that require Travel Service Center engagement, limiting the needed triage and speeding resolution times. Fiscal Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in providing preliminary responses to questions regarding FEMA/DHS Fiscal Policy. The tool provides improved responses saving the end user time versus regular engagement with the DHS/FEMA OCFO Policy. The tool also allows internal users to ask specific questions that require Fiscal Policy engagement, limiting the needed triage and speeding resolution times. | FEMA OCFO GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions for the record and providing a preliminary response to the Program Office to use in their formal response to the request. FEMA OCFO GPT leverages public facing and internal deliberative documents to assist in answering questions the Agency receives. The tool generates a draft response that is then refined/updated prior to providing a formal response. Travel Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions regarding FEMA/DHS Travel Policy and providing a preliminary response to the travel specialist to use in their formal response to the queries. 
| The tool generates a draft response that is then refined/updated prior to providing a formal response. Fiscal Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in providing preliminary responses to questions regarding FEMA/DHS Fiscal Policy. Fiscal Policy GPT will leverage the DHS FMPM and FEMA Fiscal Policy documents. The tool is planned to generate a draft response with references to assist FEMA internal workforce in compliance with established policy. These tools are leveraged in the data gathering stage and do not replace any current analysts' work or leadership review, as required, prior to submittal via any formal request for information process. | 01/02/2024 | c) Developed with both contracting and in-house resources | Microsoft (OpenAI Azure Commercial Cloud offerings) | Yes | FEMA OCFO GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions for the record and providing a preliminary response to the Program Office to use in their formal response to the request. FEMA OCFO GPT leverages public facing and internal deliberative documents to assist in answering questions the Agency receives. The tool generates a draft response that is then refined/updated prior to providing a formal response. Travel Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions regarding FEMA/DHS Travel Policy and providing a preliminary response to the travel specialist to use in their formal response to the queries. The tool generates a draft response that is then refined/updated prior to providing a formal response. Fiscal Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in providing preliminary responses to questions regarding FEMA/DHS Fiscal Policy. Fiscal Policy GPT will leverage the DHS FMPM and FEMA Fiscal Policy documents. 
| The tool is planned to generate a draft response with references to assist FEMA internal workforce in compliance with established policy. These tools are leveraged in the data gathering stage and do not replace any current analysts' work or leadership review, as required, prior to submittal via any formal request for information process. | Budget Exhibits, Passback Materials, Hearing Testimony, Questions Received, Answers Provided; Travel Policy Documents (Joint Travel Regulation (JTR), DHS Travel Policy, FEMA Travel Policy); Fiscal Policy Documents: Treasury Financial Manual (TFM), DHS Financial Policy Manual, FEMA Fiscal Policies | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2441 | OCFO Code Assist GPT | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Code Assist Generative Pre-trained Transformer (GPT) is an internal facing Generative AI (GenAI) tool to augment the FEMA workforce in generating and troubleshooting existing queries in established query languages (e.g., SQL, Java, COBOL). Users enter the language they are querying, and the Code Assist GPT then provides a proposed query based on the elements provided. If the query is unsuccessful, the tool maintains the session, allowing users to prompt for enhancements until expected results are achieved. At the end of the session, all prompts and queries are removed, and no data is stored outside of the active session. The tool provides improved query generation and rapid iteration, saving users 80-90% of the time compared to custom query development. It also supports various computer languages to assist the data analytics community. | Saving users 80-90% of the time compared to custom query development; rapid iteration, which includes error resolution to query complex data sets; and multiple computer languages (COBOL, SQL, Python, etc.) | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging GPT-4o. Code Assist GPT is an internal facing GenAI tool to augment the FEMA workforce in generating and troubleshooting existing queries in established query languages. The user simply enters the language they are trying to query (e.g., SQL, Java, COBOL) and the Code Assist GPT then provides a proposed query based on the elements provided by the user. 
| If the user experiences an error or the query is not successful, the Code Assist GPT maintains the session; as long as the user is still logged in and hasn't restarted the session, the user can continue to prompt the GPT to provide enhancements to the provided query until results are as expected. At the end of the user session, all prompts/queries are removed and no data is stored outside of the active user session. | 11/04/2025 | c) Developed with both contracting and in-house resources | Microsoft (OpenAI part of Azure Commercial Cloud offerings) | Yes | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging GPT-4o. Code Assist GPT is an internal facing GenAI tool to augment the FEMA workforce in generating and troubleshooting existing queries in established query languages. The user simply enters the language they are trying to query (e.g., SQL, Java, COBOL) and the Code Assist GPT then provides a proposed query based on the elements provided by the user. If the user experiences an error or the query is not successful, the Code Assist GPT maintains the session; as long as the user is still logged in and hasn't restarted the session, the user can continue to prompt the GPT to provide enhancements to the provided query until results are as expected. At the end of the user session, all prompts/queries are removed and no data is stored outside of the active user session. | Programming languages were used to test and fine-tune the model, such as JavaScript, COBOL, SQL, and Python. The tool leverages the GPT-4o model with the inherent capability. The tool was validated with extensive User Acceptance Testing validating inputs and outputs against similar queries that were already written and performed manually. This was then validated with a pilot group of users, with an expansion to other user groups for use. | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2710 | Executive Summary GPT | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Provides the ability for users to upload documents and receive an executive summary of those documents. | Quickly analyze lengthy or complex documents for relevance to FEMA and/or to provide a high-level summary for leadership on potential impacts. | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging GPT-4o. A summary of a document or documents that can be leveraged for high-level summaries for leadership or potential impacts. | 16/04/2025 | a) Purchased from a vendor | Microsoft (Azure Commercial OpenAI Cloud Offerings) | Yes | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging GPT-4o. A summary of a document or documents that can be leveraged for high-level summaries for leadership or potential impacts. | Sample documents were provided, and the executive summaries were reviewed for relevance and accuracy. | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2720 | Public Assistance Workload Projections | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The use case is predicting recovery program quantities of interest using supervised learning models, including predicting the number of applicants who will apply for Public Assistance, the number of PA projects that applicants will submit, the number of sites that will need to be inspected per PA project, and the cost of delivering assistance. Supervised learning models include but are not limited to the use of sample statistics, generalized linear models, decision trees, and deep neural networks for the purpose of predicting unknown quantities. | 1. For informational purposes: the models will produce predictions for to-be-determined quantities of interest. These quantities are often of interest to Agency personnel in the field, region, and headquarters, as well as DHS, OMB, NSC, and the White House. 2. For decisional purposes: in addition to being informative, the model’s predictions are likely to be used for decision making. Projections help inform staffing levels and timing. | Supervised learning models produce predictions, not recommendations and not decisions (though they can be used to inform human users in making recommendations and decisions). At a minimum, these supervised learning models will produce point predictions for the different quantities of interest for disaster declarations. Additionally, supervised learning models may produce prediction intervals or predictive distributions as feasible and appropriate for the given prediction problem. Often these outputs will be shared via business intelligence tools (e.g., Tableau or Power BI) for wide internal FEMA use. Some predictions may be shared with a more restricted audience through simpler means (e.g., an Excel workbook) as appropriate. | 01/10/2018 | b) Developed in-house | No | Supervised learning models produce predictions, not recommendations and not decisions (though they can be used to inform human users in making recommendations and decisions). At a minimum, these supervised learning models will produce point predictions for the different quantities of interest for disaster declarations. Additionally, supervised learning models may produce prediction intervals or predictive distributions as feasible and appropriate for the given prediction problem. Often these outputs will be shared via business intelligence tools (e.g., Tableau or Power BI) for wide internal FEMA use. Some predictions may be shared with a more restricted audience through simpler means (e.g., an Excel workbook) as appropriate. | FEMA: Historical Declaration and Public Assistance activity data; U.S. Census: Housing Units Logged, Density Housing Units, Number City/Township Govs, Number of Special District Govs; DHS Infrastructure: Fire Stations, Electric Substations, Dams, Ten Mile Power Lines; Dept of Agriculture: Agricultural Land (sq. miles), Wetland (sq. miles), Developed Land (sq. miles) | No | Yes | |||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2722 | Individual Assistance (IA) Predictive Models for Program Quantities | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The use case is predicting recovery program quantities of interest using supervised learning models, including predicting the number of applicants who will apply for Individual Assistance, how many inspections will be issued, and how many units are required for direct housing. Supervised learning models include but are not limited to the use of sample statistics, generalized linear models, decision trees, and deep neural networks for the purpose of predicting unknown quantities. | The models are intended to quickly quantify and reduce uncertainty around key quantities of interest to enable better programmatic decision making, such as workload management, pre-placement of staff, etc. | The outputs are the predicted values for the quantities of interest, e.g., number of survivors who will register for assistance, number of inspections issued, etc. | 01/02/2019 | b) Developed in-house | No | The outputs are the predicted values for the quantities of interest, e.g., number of survivors who will register for assistance, number of inspections issued, etc. | Historical data obtained from the National Emergency Management Information System (NEMIS); Decennial Census and American Community Survey household data | No | Yes | |||||||||||||||
| Department Of Homeland Security | ICE | DHS-2424 | AI Assisted Compromise Email Detector (AACED) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of the extensive manual effort required to review emails for signs of cyber compromise. | The use case was developed to assist ICE SOC in reviewing a collection of emails between ICE personnel and Microsoft that were part of Emergency Directive 24-02. The use case provides SOC analysts with a faster mechanism to determine indicators of compromise, substantially reducing the level of effort required for their analysis. To assist the analysts, Named Entity Recognition (NER) was used to detect PII and other associated keywords to increase analyst productivity and reduce the time required to analyze emails. | Outputs are named entities and generated text for specific questions, plus a chat interface for analysts to conduct Q&A with an email as context. | 01/06/2024 | b) Developed in-house | No | Outputs are named entities and generated text for specific questions, plus a chat interface for analysts to conduct Q&A with an email as context. | Stored Agency emails used for validation. | Yes | Yes | |||||||||||||||
| Department Of Homeland Security | ICE | DHS-2425 | Intelligent Document Processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Computer Vision | This use case intends to solve the problem of the manual effort required to validate and extract data from forms. | Business units within ICE leverage these services to automate repeatable, time-consuming processes such as invoice processing and form entry validation and extraction. This platform will provide Optical Character Recognition and machine learning models to verify, extract, and classify information from ICE forms. Using AI to provide information extraction for these processes saves ICE personnel a significant amount of time while improving data quality and enabling automation. | This platform will provide Optical Character Recognition and machine learning models to verify, extract, and classify information from ICE forms. | 01/06/2019 | c) Developed with both contracting and in-house resources | UiPath; Microsoft; Apryse; Personable Inc. | Yes | This platform will provide Optical Character Recognition and machine learning models to verify, extract, and classify information from ICE forms. | ICE’s document understanding platforms provide out-of-the-box models to verify and/or extract information from a variety of document types. These platforms also include the ability to create an ML feedback loop to tailor the models to improve the accuracy in extracting data fields from multiple document types. In this scenario, ICE would fine-tune the document understanding model using the documents submitted to the understanding workflow. | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2426 | Cybersecurity Threat Management, Detection, and Response | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of detecting and responding to cybersecurity threats in a timely manner. | These AI capabilities provide security analysts with modern tools to identify and respond to threats much more quickly than previously possible, minimizing potential damage to systems and data. | CD&I uses several AI-enabled cybersecurity tools to analyze this data. Machine learning (ML) models, such as classification and regression models, are used to analyze historical data and detect emerging threats through pattern recognition. Other ML capabilities include continuous monitoring of ICE cybersecurity data and the algorithmic identification of real-time cyber threats. This includes recognizing phishing patterns, malware signatures, or abnormal network traffic patterns across a variety of tools. Additionally, CD&I is in the process of integrating its open-source intelligence cybersecurity threat analysis platform with an LLM. This integration will allow platform users to summarize open-source intelligence on cybersecurity threats and more easily research and respond to potential cybersecurity events. | 16/03/2021 | a) Purchased from a vendor | Illumio; AttackIQ; Cofense; Splunk; Crowdstrike; Polarity | Yes | CD&I uses several AI-enabled cybersecurity tools to analyze this data. Machine learning (ML) models, such as classification and regression models, are used to analyze historical data and detect emerging threats through pattern recognition. Other ML capabilities include continuous monitoring of ICE cybersecurity data and the algorithmic identification of real-time cyber threats. This includes recognizing phishing patterns, malware signatures, or abnormal network traffic patterns across a variety of tools. Additionally, CD&I is in the process of integrating its open-source intelligence cybersecurity threat analysis platform with an LLM. This integration will allow platform users to summarize open-source intelligence on cybersecurity threats and more easily research and respond to potential cybersecurity events. | The cybersecurity solutions use commercially available large language models that have been trained on publicly available data by their providers. There was no additional training using agency data on top of what is available in the models’ base set of capabilities. During operation, the AI models interact with ICE production data from multiple sources, including data from Microsoft Defender for Office (Polarity) and suspected malicious reported emails from ICE personnel (Triage). | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2515 | AI-Enhanced ICE Tip Processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of the time-consuming manual effort required to review and categorize incoming tips. | The use of AI in this process enables the Tip Line team to more quickly identify and action tips recommended for urgent case categories. Additionally, the introduction of a BLUF field saves time by providing analysts with a high-level understanding of a tip before they review its details. | This solution uses a large language model (LLM) to enrich web tips with two additional data elements: (1) a high-level summary of the tip (BLUF), and (2) a recommended case category. The LLM generates BLUFs in English, regardless of the language used in the raw tip submission. For non-English tips, analysts may click a button to translate the full tip violation summary data element into English. The LLM is configured to only recommend case categories from a list of predefined HSI case categories. | 02/05/2025 | a) Purchased from a vendor | Palantir | Yes | This solution uses a large language model (LLM) to enrich web tips with two additional data elements: (1) a high-level summary of the tip (BLUF), and (2) a recommended case category. The LLM generates BLUFs in English, regardless of the language used in the raw tip submission. For non-English tips, analysts may click a button to translate the full tip violation summary data element into English. The LLM is configured to only recommend case categories from a list of predefined HSI case categories. | The system uses commercially available large language models trained on publicly available data by their providers. There was no additional training using agency data on top of what is available in the models’ base set of capabilities. During operation, the AI models interact with tip submissions. | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2553 | Media Classifier for Computer and Digital Storage Evidence | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Computer Vision | The AI is intended to solve the problem of analysts having to manually sort and review large volumes of media files from digital storage devices, which makes it difficult and time‑consuming to organize evidence and identify potentially relevant material. | By automating the initial classification of large volumes of media evidence, the platform enables Homeland Security Investigations personnel to more efficiently identify and review potentially relevant information, improving the overall effectiveness of digital investigations. | The platform incorporates a machine learning model that classifies media evidence from lawfully obtained computer and digital storage devices and suggests category tags based on user-selected categories, such as cars, drugs, or weapons. | 29/08/2023 | a) Purchased from a vendor | Cellebrite | No | The platform incorporates a machine learning model that classifies media evidence from lawfully obtained computer and digital storage devices and suggests category tags based on user-selected categories, such as cars, drugs, or weapons. | The vendor did not provide information on the data sets used to train its models. However, HSI does not provide the vendor with any agency data to train, fine-tune, or evaluate performance of the model(s) used in this use case. | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2758 | AI-Powered Developer Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of time‑consuming manual developer tasks, including debugging code, writing database queries, and analyzing system metrics, which slows application development and makes it harder to quickly identify and fix issues. | By streamlining routine coding tasks and surfacing useful system insights, AI-enabled developer tools increase developer productivity and support faster, higher-quality delivery of software across ICE. | The outputs of these AI-enabled tools include suggested code snippets, refactoring recommendations, optimized queries, and analytics summaries related to system behavior or performance. These outputs are presented to developers as proposed changes or insights, which must be reviewed and approved before being incorporated into the codebase through existing version control and deployment processes. The tools do not directly modify production systems; all changes must go through standard human review, testing, and approval workflows. | 15/04/2025 | a) Purchased from a vendor | Palantir | Yes | The outputs of these AI-enabled tools include suggested code snippets, refactoring recommendations, optimized queries, and analytics summaries related to system behavior or performance. These outputs are presented to developers as proposed changes or insights, which must be reviewed and approved before being incorporated into the codebase through existing version control and deployment processes. The tools do not directly modify production systems; all changes must go through standard human review, testing, and approval workflows. | The system uses commercially available large language models that have been trained on publicly available data by their providers. There was no additional training using agency data on top of what is available in the models’ base set of capabilities. | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2759 | Open-Source Intelligence for Investigations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of analysts having to manually search and make sense of vast amounts of multilingual, multimodal publicly available online data, which makes it difficult to efficiently identify relevant identifiers, high‑risk content, and patterns needed to support Homeland Security Investigations (HSI) investigations. | These AI tools significantly reduce the time and effort required to sift through large datasets, improve the ability to uncover relevant information, and enhance the overall efficiency and effectiveness of HSI’s investigative operations. | The outputs of these platforms include flagged risk alerts, extracted identifiers, image and sentiment classification, and suggested investigative leads. These platforms do not perform biometric identification, facial recognition for identity verification, autonomous targeting, or automated enforcement actions. All AI-enabled outputs are subject to mandatory human-in-the-loop review prior to any investigative, operational, or enforcement action. | 01/09/2023 | a) Purchased from a vendor | Penlink, Fivecast | No | The outputs of these platforms include flagged risk alerts, extracted identifiers, image and sentiment classification, and suggested investigative leads. These platforms do not perform biometric identification, facial recognition for identity verification, autonomous targeting, or automated enforcement actions. All AI-enabled outputs are subject to mandatory human-in-the-loop review prior to any investigative, operational, or enforcement action. | The AI platforms used for open-source intelligence investigations rely on pre-trained large language models, natural language processing models, and other third-party AI services. These models are trained on publicly available and commercially licensed data. No DHS or agency data is used to train, fine-tune, or develop the AI models. | Yes | https://www.dhs.gov/sites/default/files/2024-11/24_1126_priv_pia_ice064_socialmedia.pdf | Race/Ethnicity, Sex/Gender, Age | No | https://www.dhs.gov/sites/default/files/2024-11/24_1126_priv_pia_ice064_socialmedia.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-P1 | Normalization Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of data redundancies, inconsistencies, and difficult-to-integrate data that hinder the efficiency and accuracy of investigations. | HSI utilizes artificial intelligence to enhance data accuracy and efficiency by verifying, validating, correcting, and normalizing various types of information, including addresses, phone numbers, names, and ID numbers. This process helps to eliminate data entry errors, detect intentional misidentification, and connect related information across multiple datasets, ultimately reducing the time and resources required for investigations. The machine learning-powered normalization services offered by HSI include converting ambiguous addresses into usable formats, identifying ID types from partial information, categorizing names with complex suffixes and family names, and standardizing phone numbers to the E164 format, including determining their originating country. By normalizing and improving the quality of investigative datasets, HSI is able to use more advanced tools to find correlations and leads that would have otherwise gone undetected without extensive manual effort. | The output includes normalized data that improves search capability during investigations. This includes normalizing data to update less well-defined addresses into usable addresses for analysis (such as those using mile markers instead of a street number); inferring ID type based on a user-provided ID value (such as distinguishing a SSN from a DL number without additional context); categorizing name parts while taking into account additional factors (including generational suffixes and multi-part family names); and validating and normalizing phone numbers to the E164 standard, including their identified country of origin. | 01/04/2021 | c) Developed with both contracting and in-house resources | Booz Allen | Yes | The output includes normalized data that improves search capability during investigations. This includes normalizing data to update less well-defined addresses into usable addresses for analysis (such as those using mile markers instead of a street number); inferring ID type based on a user-provided ID value (such as distinguishing a SSN from a DL number without additional context); categorizing name parts while taking into account additional factors (including generational suffixes and multi-part family names); and validating and normalizing phone numbers to the E164 standard, including their identified country of origin. | Data holdings within HSI case files that require normalization (e.g., subpoenaed phone records). This includes, but is not limited to, evidentiary records containing phone numbers, names, addresses, and ID numbers. | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | ||||||||||||
| Department Of Homeland Security | MGMT | DHS-2433 | DHS-Chat | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | DHS personnel need a reliable solution to quickly access accurate information, process documents, and support work tasks within the DHS Workspace. Existing workflows are often time-consuming and inefficient, and there is a need to streamline operations while maintaining compliance with security requirements. | This is a chatbot based on a Large Language Model (LLM) for internal DHS employee use. It is like ChatGPT for DHS, but approved for use with non-classified internal information, including FOUO (For Official Use Only) and CUI (Controlled Unclassified Information), due to its improved security compared to publicly available chatbots. This tool is able to dynamically create written content through text prompts submitted by the user. Approved applications of this tool to DHS business include generating first drafts of documents that a human would subsequently review, conducting and synthesizing research on open-source information and internal documents, and developing briefing materials or preparing for meetings and events. | The internally available generative AI tool outputs text based on the user's input. | 12/12/2024 | b) Developed in-house | Yes | The internally available generative AI tool outputs text based on the user's input. | No agency-owned data was used to train, fine-tune, or evaluate the model. The model was trained on publicly available datasets and general knowledge up to the specified cutoff date. No DHS-specific or agency-owned data was incorporated during model development. | No | Yes | |||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2453 | ESEC Inquiry (STORM) Summarization | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This use case is intended to decrease the time it takes to enter incoming requests into STORM. By summarizing the request, analysts are able to more quickly take action and assign requests to relevant parties. It will also lead to future capabilities to draft responses to those requests. | ESEC-STORM AI employs Generative Artificial Intelligence (Gen-AI) technology to automate the creation of document summaries. This advanced system facilitates the automatic integration of these summaries into the System of Tracking, Operations, and Record Management (STORM), thereby optimizing the management of correspondence and information requests. | When a user creates a new work package and uploads a letter, it activates a Power Apps workflow that creates a summary. | 13/12/2024 | c) Developed with both contracting and in-house resources | Microsoft | Yes | When a user creates a new work package and uploads a letter, it activates a Power Apps workflow that creates a summary. | The commercial models used for this use case were trained using a diverse range of publicly available data, including text from books, articles, websites, and other sources and data types. | No | Yes | ||||||||||||||
| Department Of Homeland Security | MGMT | DHS-419 | AdaptiveMFA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enhances security and the user experience, including: Behavioral Anomaly Detection - AI monitors user activity over time to establish a baseline pattern of their behavior. If a login attempt deviates significantly, such as logging in from a foreign country, unknown workstation, or disallowed IP address space, Okta can trigger Adaptive Multi-factor Authentication (aMFA) to block access. Adaptive MFA - Okta's AI-powered Adaptive MFA tailors security challenges based on risk level. Trusted users might only need a password, while high-risk users are prompted for biometric or application-based verification. Real-Time Threat Detection - Okta's integration with AI-driven threat intelligence platforms like CrowdStrike and Microsoft Defender enhances real-time visibility into threats, correlating data from endpoint, network, and identity layers. Access Governance with Intelligence - AI enables smarter access reviews and role recommendations. It detects unusual access rights, flags overprovisioned users, and automatically suggests changes. AI is integrated into DHS's identity and access management solutions to strengthen security and enhance the user experience. | Adaptive Multi-factor Authentication (aMFA) introduces additional intelligence into Identity flows by taking into account the authentication context data during the authentication. Using the data, DHS is able to adapt security and authentication policies to enhance the security of DHS systems. | The inputs include: device, network, location, travel, IP, and external data from third parties and endpoint security integrations. The outputs are risk ratings (HIGH, MEDIUM, LOW) for each authentication attempt, which can be configured to require stricter access control policies. | 30/03/2025 | c) Developed with both contracting and in-house resources | Okta | Yes | The inputs include: device, network, location, travel, IP, and external data from third parties and endpoint security integrations. The outputs are risk ratings (HIGH, MEDIUM, LOW) for each authentication attempt, which can be configured to require stricter access control policies. | The system collects signal data during authentication to dynamically build a behavioral baseline for the user. | No | No | ||||||||||||||
| Department Of Homeland Security | MGMT | DHS-45 | Text Analytics for Survey Responses (TASR) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Quickly and accurately pulling significant topics and themes from unstructured text responses to DHS internal surveys. | The intended purpose of the AI is to perform topic modeling, sentiment analysis, or other text classification tasks on responses provided to internal staff DHS Pulse Survey questions. Text Analytics for Survey Responses (TASR) is an application for performing Natural Language Processing (NLP) and text analytics on survey responses. It is currently being applied by DHS Office of the Chief Human Capital Officer (OCHCO) to analyze and extract significant topics/themes from unstructured text responses to open-ended questions in the quarterly DHS Pulse Surveys. Results of extracted topics/themes are provided to DHS Leadership to better inform agency-wide efforts to meet employees’ basic needs and improve job satisfaction. | The system's outputs include a set of topics inferred or surfaced from the raw text comment data, as well as sentiments or other classifications inferred from the data. | 01/11/2022 | b) Developed in-house | No | The system's outputs include a set of topics inferred or surfaced from the raw text comment data, as well as sentiments or other classifications inferred from the data. | Pulse survey data. | No | Yes | https://github.com/dhs-gov/tasr_lda | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2432 | Airport Throughput Predictive Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case is a predictive model for passenger volume to help with airport staffing. | This project was to create a predictive model for the passenger volume using the Security Operations throughput count from checkpoints to help with airport staffing. | Once a month the data is ingested, the predictive model is trained, and predictions of airport checkpoint throughput are made for the airports. | 01/04/2024 | b) Developed in-house | Yes | Once a month the data is ingested, the predictive model is trained, and predictions of airport checkpoint throughput are made for the airports. | Secure Flight Passenger Data: passenger and airline reservation information received from airlines; PMIS Data: Secure checkpoint throughput counts by airport and checkpoint. | No | Yes | |||||||||||||||
| Department Of Homeland Security | TSA | DHS-2518 | Geographic Current Events Real-Time Alerting Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | TSA is able to analyze and process large volumes of data to evaluate critical events faster and more effectively. | To provide real-time, actionable information by leveraging advanced artificial intelligence (AI) and machine learning algorithms to aggregate and summarize large amounts of publicly available data from social media, news, and other sources. The benefit of this product is that it produces real-time alerts based on a geographic location, including predictive insights and detection of early signals of significant events, trends, or crises. | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | 01/01/2025 | a) Purchased from a vendor | First Alert/Dataminr | No | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | Open-source data from the web, and social media, in addition to geographical location data. | No | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2522 | idiCORE Subscriptions | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This allows agents and analysts to quickly determine interrelationships and connections, facilitating faster and more accurate investigations. | Significantly enhances efficiency by saving analysts time in mapping relationships and associations. It eliminates much of the guesswork, enabling analysts to focus on critical decisions and actions. | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | 07/17/2025 | a) Purchased from a vendor | idiCORE | No | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | idiCORE uses public records data, including government data, property records, business filings, and public social media information, in addition to proprietary/licensed data, such as specialized databases for insurance claims, law enforcement intel, and consumer data to train its models and perform its data-linking functions. | No | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2604 | AskTSA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to address several challenges within the AskTSA customer service process. These include reducing the time it takes for agents to respond to inquiries, improving the accuracy and consistency of responses, and streamlining the categorization of incoming inquiries. Additionally, the AI would help identify areas for improvement in the virtual assistant’s performance and provide actionable insights to enhance its effectiveness. By automating repetitive tasks like categorizing inquiries, the AI would allow human agents to focus on more complex issues, ultimately improving efficiency and customer satisfaction. | The AI’s intended purpose is to enhance the efficiency, accuracy, and overall effectiveness of the AskTSA customer service process. It would assist human agents by automating routine tasks such as categorizing inquiries and summarizing customer concerns, enabling faster and more consistent communication with the public. Additionally, the AI would analyze interactions with the virtual assistant to identify areas for improvement and recommend adjustments to ensure it provides accurate and helpful responses. By streamlining workflows and providing actionable insights, the AI would support TSA’s goal of delivering high-quality, timely, and reliable customer service. | Summarized Inquiries: Condensed explanations of why a customer is reaching out; Recommended Responses: Suggested replies tailored to the summarized inquiries; Categorized Inquiries: Labels or classifications of inquiries based on their content to streamline workflow; Performance Reports: Analytical insights on the virtual assistant’s interactions, highlighting areas for improvement. | 12/16/2024 | a) Purchased from a vendor | Sprinklr | No | Summarized Inquiries: Condensed explanations of why a customer is reaching out; Recommended Responses: Suggested replies tailored to the summarized inquiries; Categorized Inquiries: Labels or classifications of inquiries based on their content to streamline workflow; Performance Reports: Analytical insights on the virtual assistant’s interactions, highlighting areas for improvement. | Supervised Learning: Decision trees are trained on labeled data (input and desired output) to learn patterns and make predictions on new, unseen data. | Yes | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2605 | Lexis Nexis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Generative AI | Identity verification for passenger screening and bolstering transportation security. | Lexis Nexis assists TSA Analysts in identity verification for passenger screening and bolsters transportation security. | The AI outputs are in the form of a person, vehicular, or dwelling report that is customized based on analysts’ inputs and selections. These reports are used by TSA to assist in the verification of traveler identities, detecting fraudulent documents, and ensuring compliance with security protocols. | 09/23/2024 | a) Purchased from a vendor | REX DBA Lexis Nexis | No | The AI outputs are in the form of a person, vehicular, or dwelling report that is customized based on analysts’ inputs and selections. These reports are used by TSA to assist in the verification of traveler identities, detecting fraudulent documents, and ensuring compliance with security protocols. | AI models are trained using a combination of open web content and proprietary LexisNexis content to ensure high-quality, relevant outputs. Evaluations occur through regular internal and external audits, customer feedback and reviews, and incident response. | No | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2609 | SITE Group Subscription Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool assists TSA with identifying known and unknown threats to the U.S. and international aviation and surface transportation systems. | AI integration within SITE Group enhances its capabilities in monitoring and analyzing extremist activities online. | The AI outputs are in the form of curated reports based on human-verified information within the topics of jihadist threat, domestic violent extremism, critical infrastructure and technology, and terrorism. AI outputs within SITE provide actionable insights. Based on these outputs, follow-on actions might include flagging potential security threats for further analysis. | 09/20/2025 | a) Purchased from a vendor | SITE | No | The AI outputs are in the form of curated reports based on human-verified information within the topics of jihadist threat, domestic violent extremism, critical infrastructure and technology, and terrorism. AI outputs within SITE provide actionable insights. Based on these outputs, follow-on actions might include flagging potential security threats for further analysis. | SITE is trained on a large, human-verified dataset of archived Publicly Available Information (PAI) collected from internet-based platforms. This data includes messenger applications, social media venues, and websites. All data contained within has been human-verified by SITE expert analysts to avoid incomplete or inaccurate results from data collection methods like web scraping. No agency-owned data is used to conduct training or evaluation of this product. | No | No | ||||||||||||||
| Department Of Homeland Security | USCG | DHS-178 | Adaptive Risk Model for Inspected Small Passenger Vessels | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A lack of a comprehensive data-driven tool for informing marine inspection policy has left policymakers to make decisions based on qualitative and anecdotal information, resulting in less-than-optimal allocation of limited marine inspection resources. | The Small Passenger Vessels Safety Task Force uses machine learning and expert input to build a flexible analysis tool that identifies the main causes of marine casualties and calculates a risk score for each vessel in the largest segment of the U.S.-inspected fleet. By using a logistic regression–based model with basic machine learning, this effort improves how inspectors are allocated, sharpens the focus on higher-risk vessels, and strengthens oversight to improve passenger safety. | Numerical score that compares vessels’ predicted safety risk relative to each other. | 01/01/2021 | b) Developed in-house | No | Numerical score that compares vessels’ predicted safety risk relative to each other. | Commercial vessel profiles including: engineering, life saving, propulsion, fire protection, manning, operating routes, plan review, and USCG inspection activity details. | No | Yes | |||||||||||||||
| Department Of Homeland Security | USCIS | DHS-16 | ELIS Evidence Classifier Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Adjudicators and contractors spend too much time sifting through digital evidence documents for relevant information. | To enable end users to navigate directly to the page(s) containing evidence documents of interest instead of sifting through large PDF documents. Evidence tagging intends to accelerate case processing by identifying specific types of documents (e.g., I-589, passport photo spread, marriage certificate) and applying a metadata tag to that document object in ELIS. This way, when a user opens a case with potentially hundreds of pages of evidence documents, rather than scrolling through them one at a time to find a specific document of interest, they have clickable "bookmarks" in the UI generated from these tags that will jump directly to the corresponding page. | Tagged evidence. The system inputs an image (scanned document from Lockbox) and outputs either a specific label, such as "Border Crossing Card - Front," or no label if that document is not recognized as one of the classes. | 01/09/2020 | c) Developed with both contracting and in-house resources | SAIC and DV United | Yes | Tagged evidence. The system inputs an image (scanned document from Lockbox) and outputs either a specific label, such as "Border Crossing Card - Front," or no label if that document is not recognized as one of the classes. | The system consists of a single vision-based object recognition model and many text-based binary classifiers. The text models were trained and evaluated on separate class-specific sets of production data sampled from evidence documents, and each data point is the linearized OCR text obtained from a single scanned page image and AWS Textract. These training and testing sets are then annotated by data scientists on our team. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | ||||||||||||
| Department Of Homeland Security | USCIS | DHS-2385 | Intelligent Document Processing (IDP) for I-539 Form Digitization | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Before the use case, all pages of an I-539 application were scanned and stored as a single document in the content management system, delaying adjudication and not meeting National Archives and Records Administration (NARA) standards. The tool uses a learning model to identify, classify, and separate individual documents into their component parts for storage. | IDP for the I-539 makes use of an AI-enhanced tool to identify, categorize, and create separate images for each document type submitted as part of the 539 benefit application. Prior to implementation of this use case, all pages of a 539 application were scanned and stored as a single document in the content management system. The benefit is reduced case processing time for adjudicators by identifying and classifying supporting documents for ease of use. An additional benefit is to bring digital images into compliance with NARA standards. | Input - one digital file comprised of all pages of a 539 benefit application. Output - multiple digital files comprised of the individual documents submitted as part of the 539 benefit application. These will include the 539 form, any other USCIS forms, and image files of other supporting documents such as Passports, Driver's license, Marriage Certificate, Bank Statement, etc. All pages of the original digital file are accounted for and stored. Any pages not identified by the tool are referred to a human for document type resolution. | 04/11/2024 | c) Developed with both contracting and in-house resources | CGI Federal under the Records Management Support Services (RMSS); Hyperscience - IDP software OEM | Yes | Input - one digital file comprised of all pages of a 539 benefit application. Output - multiple digital files comprised of the individual documents submitted as part of the 539 benefit application. These will include the 539 form, any other USCIS forms, and image files of other supporting documents such as Passports, Driver's license, Marriage Certificate, Bank Statement, etc. All pages of the original digital file are accounted for and stored. Any pages not identified by the tool are referred to a human for document type resolution. | CMS/STACKS test data is used to train the model. This data is comprised of digital images of blank USCIS forms and common supporting forms (Marriage License, Driver's License, Passport, etc.) generated using fake information such as Mickey Mouse and Donald Duck in place of PII. | Yes | https://www.dhs.gov/publication/dhsuscispia-079-content-management-services-cms | No | https://www.dhs.gov/publication/dhsuscispia-079-content-management-services-cms | ||||||||||||
| Department Of Homeland Security | USCIS | DHS-2598 | PDF Intake (PDFI) for myUSCIS | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | c) Not high-impact | Not high-impact | Generative AI | Scanned PDFs submitted through MyUSCIS must be validated against form-specific business rules related to both the overall document and the contents of specific fields. A service was constructed that processes a scanned input document and returns all information pertinent to these validation rules in a consistent structure (JSON) to a user-facing ELIS microservice. The GenAI-powered library utilizes the Amazon Bedrock – Anthropic Claude 3.7 Sonnet V1 Foundation Model to extract data from PDF forms. The service provides the ability to submit forms online through the MyUSCIS UI to Lockbox instead of via mail. | Develop a service that can extract relevant fields from a scanned PDF submitted through MyUSCIS and build a JSON as an output to the ELIS microservice. The new service utilizes an AWS Bedrock-provided foundation model. It is an engineering solution that minimizes development time to add new forms or form revisions with high accuracy. | The output of this AI system is structured information about the validation rules applied to the input form as well as the extracted contents of filled fields on the form, presented in a JSON format readable by both humans and machines that is consistent with existing ELIS databases. | 07/23/2025 | c) Developed with both contracting and in-house resources | Analytica and DV United | Yes | The output of this AI system is structured information about the validation rules applied to the input form as well as the extracted contents of filled fields on the form, presented in a JSON format readable by both humans and machines that is consistent with existing ELIS databases. | During development, this system is evaluated using both manually created synthetic data (i.e., filled PDF forms with annotated contents) and production data (scans of forms submitted previously through Lockbox as scanned TIF files). The underlying pretrained foundation model supplied by the Bedrock service is used as-is with no further training or fine-tuning. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | ||||||||||||
| Department Of Homeland Security | USCIS | DHS-366 | AI Interview Simulator for Officer Training | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | By simulating realistic applicant responses to officer questions, the AI Interview Simulator enables RAIO trainees to refine their interview techniques without requiring any additional resources from experienced officer trainers or peers. | The AI Interview Simulator mimics live interviews to provide an analysis of the type of responses elicited in a mock interview. This training platform accelerates trainees' interviewing competency by providing a format to practice eliciting testimony from applicants through a chat-based user interface. | The AI Interview Simulator generates human-like conversation in a text format, specially trained and tuned for RAIO Officer training. | 08/09/2025 | c) Developed with both contracting and in-house resources | Steampunk, Customer Value Partners LLC (CVP) and Alpha Omega Integration (AOI) | Yes | The AI Interview Simulator generates human-like conversation in a text format, specially trained and tuned for RAIO Officer training. | AI Interview Simulator uses Proprietary/Private but Not Sensitive data, including training materials, internal guidance documents, and policies that are proprietary to USCIS. | No | https://www.dhs.gov/publication/dhsuscispia-027b-refugees-asylum-and-parole-system-and-asylum-pre-screening-system | Yes | https://www.dhs.gov/publication/dhsuscispia-027b-refugees-asylum-and-parole-system-and-asylum-pre-screening-system | ||||||||||||
| Department Of Homeland Security | USSS | DHS-2626 | License Plate Reader | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Computer Vision | To identify license plate information quickly, efficiently, and accurately from low-quality imagery, advanced image processing and machine learning techniques are typically employed. The tool needs to be able to enhance the clarity of the image, correct distortions, and extract relevant details, even in challenging conditions such as low resolution, poor lighting, motion blur, or obstructions. | The purpose of identifying license plate information from low-quality imagery is to enable accurate and efficient vehicle identification for applications such as law enforcement, traffic monitoring, parking management, border security, and access control, ensuring operational efficiency and enhanced security. | Optical Character Recognition (OCR) technology is utilized to detect and interpret the alphanumeric characters on the license plate. The extracted license plate information serves as a decision support tool, aiding investigators by highlighting possible characters and combinations. This allows investigators to efficiently generate and verify leads while complementing their independent analysis. | 07/01/2025 | a) Purchased from a vendor | Amped | No | Optical Character Recognition (OCR) technology is utilized to detect and interpret the alphanumeric characters on the license plate. The extracted license plate information serves as a decision support tool, aiding investigators by highlighting possible characters and combinations. This allows investigators to efficiently generate and verify leads while complementing their independent analysis. | The vendor trained a dedicated neural network with millions of synthetically generated and distorted license plates for several countries/states. No license plate images were scraped from the web. Experimental validation was performed by the vendor on Italian license plates; see "Neural Network for Denoising and Reading Degraded License Plates." | No | Yes | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-172 | Video Analysis Tool (VAT) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-197 | Mobile Language Translation Services | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2409 | ICE Mobile Check-in Application | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-414 | I-765 - USCIS Face Capture Mobile App | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-123 | Voice Analytics for Investigative Data | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | b) Presumed high-impact but determined not high-impact | Not high-impact | The definition states that the "output serves as a principal basis for a decision or action concerning a specific individual or entity..." However, the output of the AI is never the only basis for a decision, and no rights-impacting action is taken unless a human is in the loop making a further determination | ||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-49 | Mobile Device Analytics for Investigative Data | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | b) Presumed high-impact but determined not high-impact | Not high-impact | The definition states that the "output serves as a principal basis for a decision or action concerning a specific individual or entity..." However, the output of the AI is never the only basis for a decision, and no rights-impacting action is taken unless a human is in the loop making a further determination | ||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-57 | Identity Match Option (IMO) Tool for Record Compilation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The use case compiles records from across a variety of USCIS systems to provide a comprehensive history of a person’s interaction with USCIS. The AI output can be visualized through a report or dashboard to assist with case review, ensuring access to useful and accurate records (see DHS CAIO Super Memo FY24). Adjudicators review the outputs of this use case, alongside other information and insights, to process a case and make a final determination. The adjudication process can be conducted without this tool; however, doing so would significantly increase the time and effort required to process immigration requests. | ||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-165 | Automated Data Annotation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2364 | Anomaly Detection Homogenous Cargo | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2367 | Computer Vision for Aerial Detection of Land and Open Water Items of Interest | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2369 | AI for Software Delivery | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2371 | Optical Counter - UAS Detection | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2377 | Underwater ROV | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2378 | Wellness and Physical Fitness Application | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-P2 | AI for Autonomous Situational Awareness | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-2403 | Security Operation Center (SOC) Network Anomaly Detection | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-5 | Confidence Scoring for Cybersecurity Threat Indicators | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CWMD | DHS-406 | Report Analysis and Archive System (RAAS) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | DHS | DHS-368 | Commercial Generative AI for Text Generation (AI Chatbot) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | DHS | DHS-369 | Commercial Generative AI for Image Generation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | DHS | DHS-373 | Commercial Generative AI for Code Generation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2440 | Recovery and Resilience Resource (RRR) Portal | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2442 | Digital Processing Procedure Manual (D-PPM) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-248 | Incident Management Workforce Deployment Model (depmod) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-251 | Individual Assistance (IA) & Public Assistance (PA) Projections | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-254 | Planning Assistant for Resilient Communities (PARC) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-346 | Geospatial Damage Assessments | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-53 | Identification Card and Travel Document Code Detection | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-9 | Machine Translation (Previously Language Translator) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2420 | MiX MedINT (Medical Intelligence Dashboard and Canvas) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2395 | Conversation Training and Feedback Simulator | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-17 | Case Processing Improvements in FDNS-DS NexGen | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2544 | OCR for Scanning and Cataloging Documentation [Captiva Open Text] | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0001 | Adobe Suite Applications | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Adobe Suite products is to support the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) in enhancing the quality and efficiency of multimedia content handling within their operations. The out-of-the-box artificial intelligence features included in these Adobe applications assist in tasks like image enhancement, optical character recognition (OCR), content-aware editing, and document formatting. These AI capabilities enable ATF to prepare high-quality documents and multimedia files for various purposes. By automating and improving these tasks, the AI tools help reduce manual workload, minimize errors, and increase operational efficiency. Overall, the use of AI in Adobe Suite products is designed to improve the effectiveness, accuracy, and professionalism of ATF's documentation and multimedia processing. | The expected benefits of utilizing the AI features in Adobe Suite Products are broad and extend beyond FOIA-related activities. The AI capabilities in applications like Photoshop and Acrobat enhance productivity by automating routine tasks such as image editing, optical character recognition (OCR), and content-aware fill. This automation allows staff to process documents and multimedia content more quickly and accurately, leading to time savings and reduced operational costs. The improved efficiency helps in various functions, from preparing official documents to creating high-quality visual materials for communication purposes. Additionally, the AI tools contribute to better quality outputs, which can enhance public engagement and trust. Overall, the AI features in Adobe Suite Products support increased productivity, cost savings, and higher-quality work across a range of organizational activities. | The AI features within Adobe Suite products output automated enhancements and suggestions to improve the efficiency and quality of multimedia content handling. In applications like Photoshop and Acrobat, the AI provides functionalities such as image enhancement, optical character recognition (OCR), content-aware editing, and automated formatting. These outputs assist users by automating routine tasks and enhancing the quality of the final product. The AI serves as a tool to support staff in their work, but all actions are initiated and finalized by human users, ensuring that control remains with the individual operator. | a) Purchased from a vendor | Adobe | No | The AI features within Adobe Suite products output automated enhancements and suggestions to improve the efficiency and quality of multimedia content handling. In applications like Photoshop and Acrobat, the AI provides functionalities such as image enhancement, optical character recognition (OCR), content-aware editing, and automated formatting. These outputs assist users by automating routine tasks and enhancing the quality of the final product. The AI serves as a tool to support staff in their work, but all actions are initiated and finalized by human users, ensuring that control remains with the individual operator. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0002 | Airlines Travel Intelligence Program | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in the Airlines Reporting Corporation Travel Intelligence Program is to examine travel data and highlight atypical routes or passenger movements more swiftly. Leveraging AI for pattern recognition and predictive insights reduces manual data inspection, speeds the identification of significant travel anomalies, and enables more prompt, well-informed decision-making. Overall, it refines resource deployment and fosters more accurate, efficient management of travel-related intelligence. | The expected benefit is reduced effort identifying unusual travel scenarios, focusing on significant itineraries/individuals. | The AI features generate alerts, highlight uncommon travel paths/profiles, suggesting beneficial attention areas. | a) Purchased from a vendor | Airlines Reporting Corporation | No | The AI features generate alerts, highlight uncommon travel paths/profiles, suggesting beneficial attention areas. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0003 | Unmanned Aerial Systems (UAS) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Airship is an enterprise video management platform used to manage, secure, store, and analyze video surveillance obtained through criminal investigations. The Airship platform includes AI-based capabilities for optical character recognition (OCR) for car license plates, airplane tail numbers, etc., and object detection with customizable near-real-time alerts. | AI features help to support agent monitoring of surveillance video streams to ensure rapid notifications when predefined events occur that are pertinent to a criminal investigation. | Bounding boxes for video frames and metadata regarding the detected object characteristics, alphanumeric digitized text from OCR | a) Purchased from a vendor | Airship AI Holdings Inc. | Yes | Bounding boxes for video frames and metadata regarding the detected object characteristics, alphanumeric digitized text from OCR | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0004 | Alation Data Catalog | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The AI within Alation’s Data Catalog is designed to make data easy to find, understand, and use. It leverages machine learning and natural language processing (NLP) to automatically organize and tag data, creating a “map” of available information from various sources within an organization. The goal is that anyone looking for specific data can use simple, plain-language search terms, and the AI will help them locate the most relevant information quickly. | Enable users to quickly find relevant data across large, complex datasets, making information more accessible for decision-making. Additionally, Alation’s AI automates data organization and governance, helping to keep data accurate, up-to-date, and secure. It also supports compliance and builds trust in data, empowering teams to make reliable, data-driven decisions. | The Alation Data Catalog AI system primarily produces outputs that are recommendations and predictive insights to enhance data discovery, governance, and usability. AI outputs guide users toward more efficient and informed data management. | a) Purchased from a vendor | Leidos | Yes | The Alation Data Catalog AI system primarily produces outputs that are recommendations and predictive insights to enhance data discovery, governance, and usability. AI outputs guide users toward more efficient and informed data management. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0005 | Axon FUSUS (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This capability is not provided or managed by ATF. It provides ATF staff access to Axon FUSUS systems which are provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. FUSUS integrates video and other data from public safety systems to support law enforcement investigations and increase situational awareness for real-time operations. This includes AI-enabled analysis of participating video streams, with configurable real-time notifications to law enforcement. | Increased situational awareness for real-time law enforcement operations. | Notifications to law enforcement based upon preconfigured alerts from AI-enabled analysis of data streams from participating security devices | a) Purchased from a vendor | Axon | No | Notifications to law enforcement based upon preconfigured alerts from AI-enabled analysis of data streams from participating security devices | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0006 | Azure Data Factory | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | ATF recently enabled an Azure Data Factory subscription in Azure, which included smart and intelligent features like anomaly detection, data quality analysis, data completeness, and prediction. While we do not use these features directly, they are embedded within the tools. | The embedded data analytic and data quality capabilities will increase the efficiency and effectiveness with which ATF is able to locate and analyze our data, and the quality and reliability of the resulting data. | Data extraction/transformation/load pipelines used to integrate data from disparate datasets | c) Developed with both contracting and in-house resources | Microsoft | Yes | Data extraction/transformation/load pipelines used to integrate data from disparate datasets | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0007 | Azure Zen 2 Storage | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | ATF recently enabled a Storage subscription in Azure, which included smart features like automatic data indexing. While we do not use these features directly, they are embedded within the tools, enabling automatic data indexing based on the data fed into the system. | Automatic data indexing increases the efficiency and effectiveness with which ATF is able to locate and analyze our data. | A data index which aids in the retrieval and analysis of ATF data. | c) Developed with both contracting and in-house resources | Microsoft | Yes | A data index which aids in the retrieval and analysis of ATF data. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0008 | Bloomberg Government | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Bloomberg Government (BGOV) is to enhance policy and regulatory analysis by automating data aggregation, pattern detection, and trend forecasting. Using AI, the system refines vast datasets into actionable insights, helping users quickly understand emerging issues and legislative shifts. | More informed strategic planning, better resource allocation, and improved overall comprehension of complex policy environments. | The AI features compile briefs, detect regulatory patterns, and compare data points into a coherent narrative. | a) Purchased from a vendor | Bloomberg Industry Group | No | The AI features compile briefs, detect regulatory patterns, and compare data points into a coherent narrative. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0009 | CargoNet | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in CargoNet is to detect and interpret patterns within logistics and theft incident data, automating the process of identifying unusual activities or recurring risks. | The expected benefit is focusing efforts on deviating areas/goods/patterns to improve effectiveness of subsequent steps. | The AI features produce alerts, reveal clusters, and connect events to show underlying trends. | a) Purchased from a vendor | Verisk Analytics | No | The AI features produce alerts, reveal clusters, and connect events to show underlying trends. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0010 | Cell Hawk (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This capability is not provided or managed by ATF. It provides ATF staff read-only access to Cell Hawk data which is provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. | Link and trend analysis results provided by this system provide law enforcement personnel with increased insights into subjects' activities in the context of criminal investigations. | Link and trend analysis diagrams showing entities involved in criminal investigations and known cellphone-based communications between them | a) Purchased from a vendor | Leads Online | No | Link and trend analysis diagrams showing entities involved in criminal investigations and known cellphone-based communications between them | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0011 | Coinbase | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0012 | Digital.ai | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Digital.ai is used as ATF's enterprise tool for managing agile software development projects. It is used to capture user stories (requirements) and manage the processes of prioritization, implementation, testing, and close-out of the user stories. None of the AI features available within digital.ai are currently in use, but the use case is being reported because they are available within the product. AI features involve analysis of project status, supporting automated test generation, and automating software releases. ATF uses other non-AI products for these purposes. | ATF is not using any of the available AI features. | ATF has not evaluated the AI features in detail since other non-AI products are currently being used to serve the purposes for which digital.ai uses AI. | ATF has not evaluated the AI features in detail since other non-AI products are currently being used to serve the purposes for which digital.ai uses AI. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0014 | Dun & Bradstreet Business Establishments Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Dun & Bradstreet Business Establishments Data is to analyze business information, detect irregularities, and assess potential risks more efficiently. AI-driven data integration, anomaly detection, and trend analysis automate what would otherwise be intensive manual evaluations. | The expected benefit is more targeted attention on atypical organizations, improving efficiency and reducing randomness. | The AI features generate risk indicators, show relationships, and help focus examination resources. | a) Purchased from a vendor | Dun and Bradstreet Holdings Inc. | No | The AI features generate risk indicators, show relationships, and help focus examination resources. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0015 | Axon Evidence.com | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | ATF's use of Axon body-worn cameras and the associated evidence.com service is integral to ensuring ATF's compliance with DOJ policy documented in DOJ OIG, Inspector General Manual, Volume III, Chapter 236, Body Worn Camera Program. Body cam videos are transferred to the Axon Evidence.com service, which includes AI-based features for performing recognition of heads in videos for the purpose of redaction. ATF has performed limited testing of these capabilities, but is not operationally using them. | Evidence.com provides AI-based features that identify the presence of heads in videos and can perform automated redaction. This would increase the efficiency of redacting video evidence for release. However, ATF is not operationally using them. | Evidence.com AI capabilities will output bounding boxes around heads that are identified in videos, for input to redaction processes. These AI functions are not being operationally used. | a) Purchased from a vendor | Axon | Yes | Evidence.com AI capabilities will output bounding boxes around heads that are identified in videos, for input to redaction processes. These AI functions are not being operationally used. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0016 | Federal Docket Management System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | This system allows agencies to receive comments that are submitted electronically in response to rulemaking initiatives. The system has features that categorize comments based on text analytics, allows agencies to make those comments public facing or redact them, and allows agencies to download the comments. | This product permits agencies to review the electronic comments received in response to rulemaking. The features in FDMS allow agencies to bulk post up to 1,000 comments, allowing the public to see their comments faster. | It is the place where agencies receive comments that are electronically submitted in response to a rulemaking. It allows agencies to make those comments public facing. | a) Purchased from a vendor | General Services Administration (GSA) | No | It is the place where agencies receive comments that are electronically submitted in response to a rulemaking. It allows agencies to make those comments public facing. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0017 | FINDER | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in FINDER is to sift through violent crime and firearms trafficking data more efficiently. By employing machine learning to highlight significant patterns, suspects, or trends, it automates tasks that would normally require manual, resource-intensive reviews. This leads to quicker identification of essential insights, improves operational effectiveness, and ensures that effort is directed at the most pertinent leads, enhancing both speed and precision in investigative processes. | The expected benefit is more strategic attention to areas that may reduce negative outcomes or enhance preparedness. | The AI features map trends, highlight recurrent factors, and propose focus points, reducing manual data reviews. | a) Purchased from a vendor | FINDER | No | The AI features map trends, highlight recurrent factors, and propose focus points, reducing manual data reviews. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0018 | First Two | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in First Two is to link individuals to specific locations and visualize spatial relationships that might be important. By automating map-based data correlation, pattern discovery, and geospatial analysis, the system reduces manual data plotting and interpretation. This results in quicker recognition of significant activity areas, improved allocation of resources to critical locations, and overall enhancement of operational responsiveness and strategic planning. | The expected benefit is more efficient focus on frequently visited places/people, enhancing resource allocation where location matters. | The AI features visualize activity geographically, spotlight frequent places, and suggest beneficial attention areas. | a) Purchased from a vendor | FirstTwo | No | The AI features visualize activity geographically, spotlight frequent places, and suggest beneficial attention areas. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0019 | LexisNexis Accurint | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in LexisNexis Accurint is to merge and analyze an array of public and proprietary records to build a clearer picture of individuals and entities. By automating data integration, filtering, and pattern recognition, it expedites the identification of relevant subjects and connections. This advanced approach reduces manual searching, enhances accuracy in linking data points, and strategically directs focus toward high-value leads, improving overall investigative efficiency and decision-making. | The expected benefit is accelerated ID of important figures/relationships, skipping disjointed dataset searches. | The AI features compile overviews, note aliases, highlight patterns, giving clear reference points for exploration. | a) Purchased from a vendor | LexisNexis | No | The AI features compile overviews, note aliases, highlight patterns, giving clear reference points for exploration. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0020 | LexisNexis Babel Street | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0021 | Mark43 Public Safety Records | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Mark43 Public Safety Records is to organize, categorize, and correlate extensive law enforcement data from various sources. By employing AI-driven classification, entity resolution, and data linking, it reduces the manual workload needed to extract meaningful insights. This empowers users to identify key relationships and patterns more rapidly, increases data accuracy, and supports more strategic use of investigative resources, ultimately improving both timeliness and quality of public safety operations. | The expected benefit is better-informed decision-making, allowing focus on data likely to yield meaningful insights. | The AI features categorize documents, flag repeated factors, and highlight patterns hidden without assistance. | a) Purchased from a vendor | Mark43 | No | The AI features categorize documents, flag repeated factors, and highlight patterns hidden without assistance. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0022 | National Insurance Crime Bureau ISO ClaimSearch | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in National Insurance Crime Bureau ISO ClaimSearch is to analyze insurance claim data and highlight suspicious patterns or anomalies more efficiently. By applying AI-driven pattern recognition, risk assessment, and anomaly detection, it streamlines what would be a tedious manual review process. This leads to faster fraud identification, more accurate targeting of problematic claims, and better resource utilization, ultimately strengthening investigative outcomes. | The expected benefit is more effective resource use, targeting claims that differ from typical patterns rather than all equally. | The AI features produce lists of flagged claims, show patterns, and suggest where deeper validation might help. | a) Purchased from a vendor | ISO / Verisk Analytics / National Insurance Crime Bureau | No | The AI features produce lists of flagged claims, show patterns, and suggest where deeper validation might help. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0024 | Operational Planning Analytics Risk Management Solution (OPARMS) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Computer Vision | The Operational Planning Analytics Risk Management Solution (OPARMS) creates op plans within ATF's case management system, eliminating the previous system for generating op plans. ATF anticipates that OPARMS will also provide dashboard overviews of the op plans and risk data for all op plans. | Improve operational planning, data collection, and risk mitigation. Reduce time to fill out op plans. Increase accuracy of data entered into ops plans by populating info from other ATF systems. Reduce approval times by routing and tracking op plans through OPARMS. Increase collection of operations planning and after action reports. Collect and store data to begin developing the analytics for calculating and identifying risk in operations allowing team members to better mitigate those risks. | 1. Risk rating of proposed operations 2. Recommendations for resource allocation based on risk ratings | 1. Risk rating of proposed operations 2. Recommendations for resource allocation based on risk ratings | |||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0025 | Palantir (via access to external state LE system) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | This use case is being reported because Palantir has AI-based capabilities. However, ATF only accesses Palantir through task force partnerships with a state law enforcement (LE) partner. The state LE agency uses Palantir as their case management system, and ATF's use is limited to searching for information which the state LE agency chooses to share with law enforcement partners. ATF has no involvement with or any knowledge of use of AI features by the state agency which runs the system. | ATF use of the state LE partner Palantir system is limited to searching for information which the state agency chooses to share with law enforcement partners. ATF has no involvement with, any knowledge of, or any expected benefits from use of AI by the state agency which runs the system. | Unknown. ATF only uses the system to perform standard search functions of information which the state agency chooses to share with law enforcement partners. | a) Purchased from a vendor | Palantir | No | Unknown. ATF only uses the system to perform standard search functions of information which the state agency chooses to share with law enforcement partners. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0028 | ShotSpotter (via access to external state/local systems) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Rapidly locate gunshots and activity of potential investigative interest to alert relevant law enforcement agencies. | Rapid detection of gunshots, which can help to decrease the time to respond to a violent crime. | Outputs sensor reports of gunshots and activity of investigative interest. | a) Purchased from a vendor | SoundThinking | No | Outputs sensor reports of gunshots and activity of investigative interest. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0029 | Southwest Border Transaction Record Analysis Center (SWBTRAC) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Southwest Border Transaction Record Analysis Center (SWBTRAC) is to monitor cross-border financial transactions and identify unusual activities more efficiently. By integrating AI-driven anomaly detection, trend analysis, and risk scoring, it simplifies manual review and directs attention to truly irregular transfers. This approach ensures that investigative focus is applied judiciously, improves accuracy, and enhances strategic use of resources in addressing cross-border financial concerns. | The expected benefit is faster recognition of outlier scenarios, focusing on meaningful transactions rather than all equally. | The AI features provide alerts, highlight unusual transfers, and explain why certain activities warrant closer observation. | a) Purchased from a vendor | Western Union | No | The AI features provide alerts, highlight unusual transfers, and explain why certain activities warrant closer observation. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0030 | Spokeo | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Spokeo is to aggregate and enrich publicly accessible personal data to create comprehensive profiles. AI-based entity resolution, data linkage, and pattern recognition automate the task of piecing together scattered details. This significantly reduces manual workload, sharpens the accuracy of identifying individuals of interest, and ensures that investigative energies are invested where they can yield the most valuable insights. | The expected benefit is quicker access to comprehensive background details, easily identifying individuals of interest. | The AI features assemble contact info, historical records, and related data into a cohesive presentation. | a) Purchased from a vendor | Spokeo | No | The AI features assemble contact info, historical records, and related data into a cohesive presentation. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0031 | Thomson Reuters CLEAR | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The purpose of the AI use case in Thomson Reuters Clear is to consolidate and analyze diverse records, creating coherent profiles of individuals and entities. By applying AI-driven data integration, entity resolution, and intelligent filtering, it automates traditionally manual tasks. This leads to faster identification of subjects of interest, improved accuracy in linking related data points, and a more strategic use of time and resources. Ultimately, AI integration enhances investigative effectiveness, reduces errors, and supports more targeted research. | The expected benefit is reduced manual effort, enabling quicker discovery of relevant profiles/relationships. | The AI features organize profiles, indicate links, and suggest deeper review areas, streamlining fragmented processes. | a) Purchased from a vendor | Thomson Reuters | No | The AI features organize profiles, indicate links, and suggest deeper review areas, streamlining fragmented processes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0032 | TransUnion TLOxp Online Investigative Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in TransUnion TLOxp Online Investigative Services is to gather and analyze diverse personal, financial, and asset-related data more effectively. By implementing AI for entity resolution, risk scoring, and pattern detection, it replaces manual data handling with automated insights. This accelerates the discovery of meaningful leads, reduces errors, and ensures attention is concentrated on cases truly warranting further review, improving both accuracy and resource allocation. | The expected benefit is better time/effort use, focusing on subjects/data points that seem more meaningful. | The AI features highlight key personal details, indicate anomalies, and present data in a structured format for deeper inquiry. | a) Purchased from a vendor | TransUnion | No | The AI features highlight key personal details, indicate anomalies, and present data in a structured format for deeper inquiry. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0033 | Veritone Redact | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The purpose of the AI use case in Veritone Redact is to support ATF in processing Freedom of Information Act (FOIA) requests by automating the redaction of sensitive information within audio and video files. | The expected benefits of the AI use case in Veritone Redact involve enhancing the efficiency and accuracy of redacting sensitive information within audio and video files. By automating the identification and redaction of personally identifiable information and other confidential content, the AI reduces the time and effort required from staff to process multimedia materials. This leads to faster turnaround times for releasing information, thereby reducing customer wait times for FOIA requests. The AI-driven redaction process also helps ensure compliance with privacy laws and regulations, minimizing the risk of inadvertently disclosing sensitive information. These efficiencies result in cost savings through reduced labor hours and improved resource allocation. Overall, the AI in Veritone Redact is expected to improve operational efficiency, enhance compliance, and support timely access to information for the public. | The AI system in Veritone Redact outputs recommendations and automated actions to support the redaction of sensitive information within audio and video files. It intelligently identifies personally identifiable information and other confidential content that may need to be redacted under legal exemptions. All suggested redactions are reviewed and approved by human staff to ensure compliance with legal standards. | a) Purchased from a vendor | aiWARE | Yes | The AI system in Veritone Redact outputs recommendations and automated actions to support the redaction of sensitive information within audio and video files. It intelligently identifies personally identifiable information and other confidential content that may need to be redacted under legal exemptions. All suggested redactions are reviewed and approved by human staff to ensure compliance with legal standards. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0034 | Whooster | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in Whooster is to integrate and analyze data from multiple sources to build comprehensive profiles on persons or entities of interest. AI-driven entity resolution, relationship mapping, and data enrichment help automate what would otherwise be manual, time-consuming cross-referencing. This improves the speed and accuracy of identifying relevant connections, ensures more targeted follow-up, and optimizes the allocation of investigative resources. | The expected benefit is more effective use of time, focusing on particularly relevant subjects. | The AI features compile profiles, highlight relationships, and present context, ensuring key info is easily accessible. | a) Purchased from a vendor | Whooster | No | The AI features compile profiles, highlight relationships, and present context, ensuring key info is easily accessible. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0035 | Commercial LPR (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This capability is not provided or managed by ATF. It provides ATF staff access to Flock license plate reader systems which are provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. | License plate reader systems are used by law enforcement to assist with identification of vehicles associated with criminal investigations. ATF has no role in training or managing AI capabilities which are incidental to providing the service. | Alphanumeric characters and/or symbols associated with vehicle license plates | a) Purchased from a vendor | Flock Safety | No | Alphanumeric characters and/or symbols associated with vehicle license plates | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0036 | ELSAG/Leonardo (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This capability is not provided or managed by ATF. It provides ATF staff access to ELSAG license plate reader systems which are provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. | License plate reader systems are used by law enforcement to assist with identification of vehicles associated with criminal investigations. ATF has no role in training or managing AI capabilities which are incidental to providing the service. | Alphanumeric characters and/or symbols associated with vehicle license plates | a) Purchased from a vendor | Leonardo US Cyber and Security Solutions LLC | No | Alphanumeric characters and/or symbols associated with vehicle license plates | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0038 | Thomson Reuters Vigilant Vehicle Manager | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The purpose of the AI use case in Thomson Reuters Vigilant Vehicle Manager is to intelligently organize, correlate, and analyze large volumes of license plate and vehicle sightings data. By automating the identification of recurring patterns and providing data-driven insights, it streamlines the workflow for identifying vehicles that may warrant attention. The AI capabilities reduce time-consuming manual reviews, improve accuracy in spotting significant trends, and help allocate investigative resources more effectively for timely and well-informed actions. | License plate reader systems are used by law enforcement to assist with identification of vehicles associated with criminal investigations. ATF has no role in training or managing AI capabilities which are incidental to providing the service. | The AI features produce overviews of vehicle activity, highlight recurring patterns, and point to areas for further review. | a) Purchased from a vendor | Motorola | No | The AI features produce overviews of vehicle activity, highlight recurring patterns, and point to areas for further review. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0039 | Veritone Illuminate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Veritone Illuminate is a cloud-based application that provides machine translation and audio and video analysis transcription to aid human review in investigations. The product allows us to leverage artificial intelligence (AI) to systematically turn unstructured data (audio and video files) into structured data. The structured data is easily searchable and provides more value to our cases. | Converts audio and video files to text searchable formats, and quickly translates multiple native languages into English-based text for more efficient review. | audio/video transcription text; face detection (not used); object/scene detection results in images/videos; text extraction from images; speaker identification in audio; pattern recognition results; entity extraction | a) Purchased from a vendor | Veritone | Yes | audio/video transcription text; face detection (not used); object/scene detection results in images/videos; text extraction from images; speaker identification in audio; pattern recognition results; entity extraction | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0040 | SAS (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporate and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | predictive model outputs; statistical analysis; natural language processing; text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | predictive model outputs; statistical analysis; natural language processing; text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0041 | R (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporate and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | predictive model outputs; statistical analysis; natural language processing; text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | predictive model outputs; statistical analysis; natural language processing; text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0042 | Stata (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporation and market conditions. ATR relies heavily on data analytical tools to support micro - and macro-level analysis. | predictive model outputs statistical analysis natural language processing text analytics forecasting predictions optimization recommendations risk analysis scores anomaly detection customer segmentation groupings decision tree analysis paths | predictive model outputs statistical analysis natural language processing text analytics forecasting predictions optimization recommendations risk analysis scores anomaly detection customer segmentation groupings decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0043 | Matlab (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporation and market conditions. ATR relies heavily on data analytical tools to support micro - and macro-level analysis. | predictive model outputs statistical analysis natural language processing text analytics forecasting predictions optimization recommendations risk analysis scores anomaly detection customer segmentation groupings decision tree analysis paths | predictive model outputs statistical analysis natural language processing text analytics forecasting predictions optimization recommendations risk analysis scores anomaly detection customer segmentation groupings decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0044 | Databricks (Forecasting, predictive analytics, data/statistical analysis) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Client-side and SaaS products that contain fundamental modeling, AI, and machine learning techniques for predictive modeling, natural language processing, computer vision, and deep learning. Design and develop sophisticated economic models to analyze markets. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporation and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | predictive model outputs, statistical analysis, natural language processing, text analytics, forecasting predictions, optimization recommendations, risk analysis scores, anomaly detection, customer segmentation groupings, decision tree analysis paths | a) Purchased from a vendor | Databricks | Yes | predictive model outputs, statistical analysis, natural language processing, text analytics, forecasting predictions, optimization recommendations, risk analysis scores, anomaly detection, customer segmentation groupings, decision tree analysis paths | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0045 | Salesforce | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | This initiative modernizes our current on-premises and matter management databases and storage systems by migrating to cloud-based platforms specifically architected for AI integration and enhanced system interoperability. Our existing applications and data are currently hosted on on-premises systems that create fundamental barriers to AI implementation, limiting data accessibility, constraining scalability, and preventing the real-time data integration that modern AI tools require. | (1) Creates a replicable model for DOJ-wide adoption by establishing standardized cloud architecture and data integration frameworks that can be scaled across components. Demonstrates how breaking down legacy data silos enables coordinated, AI-driven insights and business intelligence capabilities that support department-wide resource allocation and investigative priorities. (2) Leverages JMD's established AWS Landing Zone to accelerate cloud adoption and builds upon existing data governance frameworks and system modernization efforts across DOJ components. The migration integrates with current cloud-based systems through the AWS Landing Zone infrastructure, enabling AI-powered business intelligence capabilities without duplicating technology investments. (3) Completing the migration to AI-native cloud infrastructure and deploying an AI-driven business intelligence dashboard that enables Division leadership to explore patterns and trends across the litigation portfolio. | Enabling Infrastructure: creates foundational capability for AI applications rather than direct AI outputs; enables other use cases to generate their respective predictions, recommendations, and automated actions. | a) Purchased from a vendor | No | Enabling Infrastructure: creates foundational capability for AI applications rather than direct AI outputs; enables other use cases to generate their respective predictions, recommendations, and automated actions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | ||||||||||||||
| Department Of Justice | Department of Justice / COPS | DOJ-0046 | ChatBot | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The chatbot responds to customer questions, allowing staff to focus on other priorities. | The Intelligent Bot tool is implemented using the Question and Answer (QnA) Maker and Language Understanding (LUIS) services, which are built on Microsoft Azure Software as a Service (SaaS) cloud infrastructure and deployed on the COPS website. This allows COPS to offer a knowledge base of questions and answers on the site, which responds via a chat box to questions entered by the customer. | Response | b) Developed in-house | No | Response | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0047 | BMC Helix ITSM | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | BMC Helix's ITSM AI capabilities include Proactive Problem Management and Incident Correlation, which will allow CRM's IT Service Desk to more efficiently identify issues, resolve incidents, automate case routing, and perform root cause analysis. | Increase IT service desk productivity, prevent issues before they occur, and decrease user downtime. | Prediction, recommendation | a) Purchased from a vendor | BMC | Yes | Prediction, recommendation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0048 | Thomson Reuters CLEAR - License Plate Recognition | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | CLEAR LPR enables investigators to identify location history for license plates, connect addresses and individuals of interest to a vehicle's location and obtain images of a vehicle. CRM uses CLEAR LPR to aid in investigations. | Streamline investigations, reducing the need to search multiple platforms, while saving costs and allowing investigators to more quickly identify relevant information. | Object recognition, OCR, prediction | a) Purchased from a vendor | Thomson Reuters | No | Object recognition, OCR, prediction | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0050 | Veritone | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve transcription and translation of written, video, and audio files. | Transcribe audio and video, saving manual review time and associated costs. | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | a) Purchased from a vendor | Veritone | Yes | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0053 | AWS/cloud.gov - Network Routing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Support public complaint submission by enabling network routing technology to optimize speed of routing and re-route traffic around any configuration issues. | Expected benefits: Helps sustain reliable network access to CRT networked infrastructure for CRT staff and for public users submitting complaints. | Decision and action relating to network routing and load management. | a) Purchased from a vendor | Amazon | No | Decision and action relating to network routing and load management. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0054 | Azure Platform/Tools - Network Routing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Decision and action relating to network routing and load management. | Expected benefits: Helps sustain reliable network access to CRT networked infrastructure for CRT staff. | Decision and action relating to network routing and load management. | a) Purchased from a vendor | Microsoft | No | Decision and action relating to network routing and load management. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0055 | Camtasia | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Dynamic captions use AI technology within the local client software to convert speech to text and produce a transcript file. | Captioning is enabled based on user preference within the client app. Producing and retaining a transcript is disabled by CRT IT policy. | Contemporaneous closed captioning. Transcript saving is disabled. | Contemporaneous closed captioning. Transcript saving is disabled. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0056 | Cloudflare Turnstile | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Prevents bots from automatically submitting spam complaints through the public web portal. | This should reduce the time required for reviewing reports submitted by the public, thus increasing efficiency in the report review process and improving public service. | Decision related to routing of a civil rights violation report submitted by a bot. | a) Purchased from a vendor | Cloudflare | Yes | Decision related to routing of a civil rights violation report submitted by a bot. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0057 | Dragon | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Use natural language processing to convert spoken audio to text for employees with vision or mobility limitations. | Increased accessibility. Employees with a disability are provided a reasonable accommodation. | Text transcription of spoken audio. Navigation of laptop operating system. | a) Purchased from a vendor | Nuance Communications | Yes | Text transcription of spoken audio. Navigation of laptop operating system. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0058 | Evidence.com | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Use the auto-transcribe feature to transcribe body-worn camera footage to text, making it more easily searchable. Also use the redaction assistant feature to remove sensitive information from videos. | Improved speed of pre-processing workflows, faster identification of relevant audio, efficiency of investigations. | Text transcriptions, video with sensitive images redacted | Text transcriptions, video with sensitive images redacted | |||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0060 | DEA Drug Signature Program Models | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Understanding the manufacturing origin and distribution route of illicit drugs by analyzing the chemical composition of seized drugs is a core DEA task. The purpose of this use case is to use AI/ML techniques to develop, maintain, and improve models that enable designated forensic chemists to identify a notional geographic origin or notional manufacturing route of samples selected for DEA's Drug Signature Programs. | The solution supports designated forensic chemists at DEA to automate analyses and to more quickly identify trend changes regarding drug sample notional geographic origin or notional manufacturing route. | Designated forensic chemists responsible for a particular Signature Program are provided with the model's output (a notional geographic region of origin or a notional manufacturing route of samples), which these forensic chemists then evaluate along with other available information to better understand drug trends. | c) Developed with both contracting and in-house resources | Yes | Designated forensic chemists responsible for a particular Signature Program are provided with the model's output (a notional geographic region of origin or a notional manufacturing route of samples), which these forensic chemists then evaluate along with other available information to better understand drug trends. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0061 | LPR: DEA DEASIL Program | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA needs an efficient and effective way to identify and track the movements of persons of interest based on vehicle license plates. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | Yes | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0062 | LPR: State Partner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | License Plate Readers (LPR) can be one important investigative tool to support understanding of drug markets, manufacturing, and distribution channels. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | No | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0063 | LPR: Federal Partner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA needs an efficient and effective way to identify and track the movements of persons of interest based on vehicle license plates. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | No | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0064 | LPR: Commercial Solution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | License Plate Readers (LPR) can be one important investigative tool to support understanding of drug markets, manufacturing, and distribution channels. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | No | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0065 | Friction Ridge Print Comparisons | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA, when conducting fingerprint analysis to identify individuals who may be connected to evidence, needs to be able to compare friction ridge prints to other prints within the boundaries of a case. The product enables linking of cases in which individuals are not necessarily identified. | This use case saves time and provides information for human decision-making. | Outputs images and portions of print cards. | a) Purchased from a vendor | Yes | Outputs images and portions of print cards. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0066 | Automated Count of Items in Photos | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to validate the drug forms, shapes, sizes, and counts contained in drug seizure exhibits so that this information can be effectively used in court proceedings. | To ensure timely, high-confidence counts, this use case allows forensic scientists to accelerate validation and enables quality control checks by serving as an unbiased count against submitted paperwork and manual counts. This will reduce a labor- and resource-intensive counting process. | Outputs recommended counts with image labels. | Outputs recommended counts with image labels. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0067 | Body Worn Camera (BWC) Audio-Video Software AI Tools | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0068 | Supply Chain Analytics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA has a mission to protect communities and save lives. It requires an understanding of global drug markets, manufacturing, and distribution channels as well as the impact of illicit drugs on communities and individuals. Supply chain analytics is a tool to further open and active investigations and protect the American public. | Supply chain information about goods associated with drug manufacturing and trafficking can further investigative leads and allow dedicated DEA personnel to track global trends, determine the impact of market forces, and understand import/export stakeholders. This is particularly critical for DEA's work on precursor chemicals used in the production of synthetic drugs such as fentanyl. | Supply chain analytical outputs vary by query and generally focus on specific markets or business entities. | a) Purchased from a vendor | No | Supply chain analytical outputs vary by query and generally focus on specific markets or business entities. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0069 | Data & Analytics: Database Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | DEA needs a way to search, monitor, and analyze a wide variety of public-facing unstructured and structured data sources such as scientific literature, social media, dark web, news, public records, public internet forums, and more in furtherance of investigations. | This use case enables DEA to gain key insights at exponentially higher speeds with lower costs by fusing a variety of data sources. Benefits include operational insights, emerging threat detection, agent safety, public safety, improved mission-enabling services, etc. | Outputs vary by use case. | a) Purchased from a vendor | No | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0071 | Data & Analytics: Healthcare Fraud Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | DEA needs a way to facilitate analysis of health care fraud and abuse, especially those impacting government health programs. Federal government healthcare insurance programs include Medicare, Medicaid, TRICARE, VA, and others. The Health Care Fraud and Abuse Control Program (HCFAC) was created to unite DOJ and HHS in their efforts to combat fraud. | Enhances the detection and prevention of health care fraud and abuse crimes within the context of DEA's mission. | Outputs include data visualizations, as well as trend, benchmark, and link analyses. | c) Developed with both contracting and in-house resources | No | Outputs include data visualizations, as well as trend, benchmark, and link analyses. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0072 | Data & Analytics: Threat and Security Incident Monitoring | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA wants to improve its real-time collaboration, executive protection, and agent safety with data-driven situational awareness, and identify potential threats using entity resolution to analyze multiple commercial and government data feeds. | This use case enables DEA to gain key insights at exponentially higher speeds with lower costs by fusing a variety of data sources on potential threats from both insiders and external actors and to assist internal monitoring of multiple devices for real-time collaboration. This use case provides entity resolution, taking information entered by DEA analysts and scanning data sources for likely matches with risk indicators, and returning those results to analysts for review. | Outputs vary by use case. | a) Purchased from a vendor | No | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0073 | Data & Analytics: Transportation OCR | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA needs a reliable, accurate way to identify the cartel or other organization associated with a seized drug exhibit. | AI capabilities can enable DEA to identify cartels or other organizations associated with drug seizures. This could improve DEA's understanding of drug trends. | The technology can "fingerprint" drug seizures of various forms (e.g. packaging, powder, pills). The output is a series of best matches for experienced DEA personnel to evaluate for possible matches to associated organizations or details from previously collected drug seizures. | The technology can "fingerprint" drug seizures of various forms (e.g. packaging, powder, pills). The output is a series of best matches for experienced DEA personnel to evaluate for possible matches to associated organizations or details from previously collected drug seizures. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0081 | Data & Analytics: Chemistry Instrument Library | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA needs a way to identify unknown drug samples, which can be done by comparing them against a library of known spectra for the best match. | For less commonly encountered compounds, this use case saves time and provides information for human decision-making. The comparison/matching also fulfills requirements to provide spectra from known/traceable materials. | Outputs a recommended list of best matches to be decided upon by the analyst and included in the case file as supporting evidence for the identified substances reported. | c) Developed with both contracting and in-house resources | Yes | Outputs a recommended list of best matches to be decided upon by the analyst and included in the case file as supporting evidence for the identified substances reported. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0082 | Data & Analytics: Gunshot Detection System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Rapidly locate gunshots and activity of potential investigative interest to alert relevant law enforcement agencies. | Rapid detection of gunshots, which can help to decrease the time to respond to a violent crime. | Outputs sensor reports of gunshots and activity of investigative interest. | a) Purchased from a vendor | No | Outputs sensor reports of gunshots and activity of investigative interest. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0083 | Financial and Cryptocurrency Analysis: Federal Partner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Intelligence Analysts and DEA agents require tools that enable them to accelerate triage, discovery, investigations, and reporting while preserving the context, control, and credibility necessary to make their intelligence actionable. These tools must also support the rapid processing of raw, primary source content to provide useful insights and facilitate the identification of patterns and relationships in financial transactions, including cryptocurrencies, to aid in drug trafficking and money laundering investigations. | Saves time and money by freeing up Intelligence Analysts to focus on higher-order analysis, while enhancing the detection and prevention of financial crimes related to ongoing investigations. | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | c) Developed with both contracting and in-house resources | No | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0084 | Financial and Cryptocurrency Analysis: Commercial Solution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Intelligence Analysts and DEA agents require tools that enable them to accelerate triage, discovery, investigations, and reporting while preserving the context, control, and credibility necessary to make their intelligence actionable. These tools must also support the rapid processing of raw, primary source content to provide useful insights and facilitate the identification of patterns and relationships in financial transactions, including cryptocurrencies, to aid in drug trafficking and money laundering investigations. | Saves time and money by freeing up Intelligence Analysts to focus on higher-order analysis, while enhancing the detection and prevention of financial crimes related to ongoing investigations. | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | c) Developed with both contracting and in-house resources | No | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0086 | Controlled Substances Act (CSA): Automation of Reports and Consolidated Orders System (ARCOS) Data Summarization | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to automate the validation, summarization, and outlier identification of ARCOS data for further analysis. | To save time and effort of fully manual review of ARCOS data. | Outputs a list of recommendation with validated data with summary information on the detected outliers. | c) Developed with both contracting and in-house resources | Yes | Outputs a list of recommendation with validated data with summary information on the detected outliers. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0087 | Controlled Substances Act (CSA): Transaction Data Ranking | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to rank CSA transaction data based on the factors identified/calculated as part of a manual analysis. | To reduce the time and effort of manual ranking. | Outputs a data visualization that highlights activity, based on certain factors, and includes a ranking. | c) Developed with both contracting and in-house resources | Yes | Outputs a data visualization that highlights activity, based on certain factors, and includes a ranking. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0088 | Intelligence Data Platform | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | DEA has a mission to protect communities and save lives. This requires an understanding of communications within criminal operations. Multi-language machine transcription of audio files from lawfully seized devices, authorized correctional facility communications, and other authorized communications, with necessary English translation, will filter massive audio files to relevant data for review. | Expedites investigations as audio files can be easily searched and filtered to quickly identify which parts of the conversations should be reviewed and interpreted officially by human translators and analysts for discovery purposes. | Outputs vary by use case. | a) Purchased from a vendor | PenLink PLX | Yes | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0089 | Call Center Management and Service Delivery Support | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA wants to improve the effectiveness of the Diversion Control Registrant Call Center by automating the routing of incoming calls, providing answers to common questions for Diversion Control Registrant call center agents to identify and quickly propose steps toward resolution, and monitoring customer feedback. | To expedite responding to registrant needs with timeliness and consistency of DEA's customer service posture to DEA registrants. To decrease labor costs while maintaining high levels of customer service to DEA registrants. To capture needed improvements to service for implementation. | Outputs a recommended course of action for review by DEA staff along with a prioritization classification and provides customer response metrics. | Outputs a recommended course of action for review by DEA staff along with a prioritization classification and provides customer response metrics. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0090 | Generative AI R&D Sandbox | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | a) High-impact | High-impact | Generative AI | The purpose of this use case is to create an R&D environment to enable DEA to test and prototype Generative AI-based solutions. | To provide a safe environment to prototype and test experimental AI-based use cases. | Outputs vary based on the use cases. | a) Purchased from a vendor | NVIDIA | Yes | Outputs vary based on the use cases. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0091 | Instructional System Design Online Courses | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA has a mission to protect communities and save lives. The purpose of this use case is to generate images for use in training materials. | To reduce the amount of time and resources needed to develop training aids and graphical inserts for online Computer Based Training (CBT). | Outputs graphical content based on the queries entered by users. | Outputs graphical content based on the queries entered by users. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0092 | Autonomous Drone Detection and Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA needs drones that can safely and effectively navigate autonomously. | This use case supports human drone operators and pilots in deploying drones by collecting data. It minimizes the need for costly, extensive training and allows drone operators to focus on real-time operational needs. It supports post-deployment data analysis. | Outputs high-resolution imaging and thermal imaging data, video feeds, 3D mapping, and reports and analytics on drones and their communication links. | a) Purchased from a vendor | No | Outputs high-resolution imaging and thermal imaging data, video feeds, 3D mapping, and reports and analytics on drones and their communication links. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0093 | Nuclear Magnetic Resonance (NMR) Spectra Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA has a mission to protect communities and save lives. Phase and baseline corrections are important processing steps in the analysis of Nuclear Magnetic Resonance (NMR) spectra. Deep learning achieves excellent results in recognition and segmentation tasks, supporting users with spectra processing and interpretation. | Fast processing and interpretation of nuclear magnetic resonance spectra. | Outputs predicted labels for the spectra. | Outputs predicted labels for the spectra. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0094 | Automated IT Services and Application Monitoring | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0100 | BriefCatch | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Addresses errors found in manual legal writing. | Aids attorneys with grammar and sentence structure to enable stronger written product. | Briefs, summaries, and any other written work product. | a) Purchased from a vendor | Lawcatch LLC | No | Briefs, summaries, and any other written work product. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0101 | Cybersecurity Defense Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses manual process of detecting unknown cybersecurity threats. | Certain tools are used Department-wide to provide endpoint detection; threat intelligence, analysis, and response; and related services. These tools help the Department more quickly identify and respond to threats and indicators of compromise from systems. | Recommendations related to potential IT security threats. | a) Purchased from a vendor | CrowdStrike, Zscaler, Splunk, Lookout, Palo Alto Networks, and Cisco Secure Network Analytics | Yes | Recommendations related to potential IT security threats. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0102 | eLitigation Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses errors and speed delays found in exclusively manual human review of voluminous electronic information. | Department of Justice components use electronic litigation (“eLitigation”) tools for a broad range of purposes, including support of investigations, litigation, and FOIA and Privacy Act processes. Most of these tools are commercial-off-the-shelf products that are commonly used outside of government, such as Everlaw, FOIAXpress, and Relativity. These tools increasingly integrate AI capabilities that can assist with tasks core to the mission of the Department, such as surfacing potentially discoverable information in voluminous collections of emails, text messages, or other electronic records; locating potentially inculpatory or exculpatory evidence in voluminous electronic data; and identifying material that may be appropriate for disclosure or withholding according to applicable legal rules and privileges. eLitigation tools can offer substantial benefits over exclusively human review of voluminous electronic information: they can be faster, more accurate and consistent, and more efficient. Please note: These tools are used in contexts that are high-impact, but the nature and details of AI uses vary, which may affect whether particular uses are high impact. | Outputs vary by use case. | a) Purchased from a vendor | CloudNine Law, Everlaw, Relativity, and Nuix. | Yes | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0103 | Digital Forensics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Addresses key problems in processing digital data related to law enforcement including efficiency in processing digital evidence, automating time-consuming tasks such as organizing and classifying data, and accuracy in evidence analysis. | Forensic analysis tools used to extract, analyze, search, and organize digital evidence and datasets. Increases the efficiency of extracting data from devices and of analyzing/searching for pertinent data within devices and datasets. | Outputs vary by use case. | a) Purchased from a vendor | These tools include, for example, Cellebrite, Magnet Axiom and Griffeye | Yes | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0105 | LexisNexis (AI assisted legal research) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses manual process of conducting legal research. | Improves accuracy and efficiency of legal research. | Recommends caselaw and other legal materials (such as statutes, regulations, and scholarly articles) and, in some circumstances, an overview of the law in response to queries. | a) Purchased from a vendor | LexisNexis | Yes | Recommends caselaw and other legal materials (such as statutes, regulations, and scholarly articles) and, in some circumstances, an overview of the law in response to queries. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0106 | Percipio Skillsoft | a) Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Helps employees identify relevant training courses. | Interactive tool customizes learning environments for the workforce's individual needs. | Training recommendations. | Training recommendations. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0107 | ServiceNow | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | ServiceNow automates IT helpdesk ticket triage and classification and can handle routine inquiries and simple tasks to free up resources to focus on higher priority issues. | To automate engagement with IT helpdesk personnel for IT requests. ServiceNow is a cloud-based platform that digitizes and automates workflows. It provides IT support for common and simple requests 24/7, and virtual agents handle common inquiries that free staff for higher priority issues. Consistent processing could help reduce errors and increase efficiency. | Automated service requests; incident resolution recommendations; virtual agent responses; data insights and analytics reports; search results and recommendations; documented and categorized knowledge-based articles. | a) Purchased from a vendor | ServiceNow | Yes | Automated service requests; incident resolution recommendations; virtual agent responses; data insights and analytics reports; search results and recommendations; documented and categorized knowledge-based articles. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0109 | Westlaw (AI assisted legal research) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Addresses manual process of conducting legal research. | Performs legal research, improving its accuracy and efficiency. | Recommends caselaw and other legal materials (statutes, regulations, scholarly articles, etc.) and, in some circumstances, an overview of the law in response to queries. | a) Purchased from a vendor | Thomson Reuters | Yes | Recommends caselaw and other legal materials (statutes, regulations, scholarly articles, etc.) and, in some circumstances, an overview of the law in response to queries. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0111 | Parallel Search from CaseText | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0112 | Qualtrics | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Qualtrics is a cloud-based service that can send out surveys and record responses. ENRD uses it to send out questionnaires to its own employees to gather feedback on issues such as training needs and student feedback on training sessions. ENRD also uses Qualtrics to collect victim impact statements and to collect public input in environmental justice matters. Qualtrics includes a sentiment analysis feature, but ENRD has not used it. Sentiment analysis can review narrative responses, summarize the overall sentiment of a group of respondents, and surface insights and issues from a large number of responses. | It can provide immediate analysis of respondent sentiment and surface insights. It can reduce the hours and effort needed to read and score narrative responses and to summarize the overall opinion on the subject. | Survey responses and reports summarizing data from responses. | Survey responses and reports summarizing data from responses. ||||||||||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0113 | SimplyFile | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | SimplyFile uses AI to learn a user’s email filing habits and then predict where they will want to move each email by displaying a list of suggested filing locations they can select from. This allows users to file their emails with a single button click, rather than having to click and drag them to the correct folders. | Faster and more accurate email filing | A sorted list of predicted filing locations (Outlook folders) | a) Purchased from a vendor | TechHit | Yes | A sorted list of predicted filing locations (Outlook folders) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0114 | Veritone | a) Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Improve transcription and translation of written, video, and audio files | Faster screening of large volumes of electronic materials or information that may be of interest in an investigation or discovery. | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0116 | Critical Mention | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Identification of public data | Reduction in time required to update | Summary of publicly available information | a) Purchased from a vendor | Critical Mention | No | Summary of publicly available information | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0117 | Evidence.com | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Inefficient and costly manual evidence review | Cost Savings, reducing court preparation times | Reports, narratives, and summaries | a) Purchased from a vendor | Axon | No | Reports, narratives, and summaries | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0118 | Flashpoint Ignite | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | | Retired | a) High-impact | High-impact |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0119 | Insider Threat Management and User Activity Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Identification and prevention of anomalous user behavior too subtle for human detection and analysis | Use of this system proactively identifies and prevents costly data breaches, reduces incident response time, and gives EOUSA and the USAO comprehensive insight into user behavior to maintain security and compliance | Analytics and predictive models | a) Purchased from a vendor | No | Analytics and predictive models | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0120 | Object Classification Tool - Field Office Security Camera | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Object anomaly detection | Enhanced security | Security notifications based on anomaly detection | a) Purchased from a vendor | AI Model Provider | No | Security notifications based on anomaly detection | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0121 | Data Synthesis, Sentiment, Filtering, and Location Linking | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Data triage and analysis | Enhanced threat information detection | Tagged data for further confirmation, research, and analysis | a) Purchased from a vendor | No | Tagged data for further confirmation, research, and analysis | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0122 | Facial Recognition Technology | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Facial recognition | Generation of investigative leads | Potential leads through suggested facial matches | b) Developed in-house | No | Potential leads through suggested facial matches | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0123 | OCR and Translation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Transcription and translation | The AI will help the FBI digitize data. | Digital text | a) Purchased from a vendor | Yes | Digital text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0124 | License Plate Reader 1 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | License plate reader technology assists in locating vehicles associated with persons of interest. | Program uses character recognition to read and identify license plates. | Video | a) Purchased from a vendor | AI Service Provider | No | Video | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0125 | Enterprise Telecommunications Information System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Speech to text. | Reduced customer wait time. | text | a) Purchased from a vendor | Yes | text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0126 | Facial Recognition Technology and Data Mapping Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Facial recognition of open-source images | Generation of investigative leads | Potential matches for human review | a) Purchased from a vendor | No | Potential matches for human review | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0127 | Human Language Extraction and Translation Technology | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Machine translation of documents and digital files | Faster FBI operations | Extracted and transcribed / translated text from documents and digital files | c) Developed with both contracting and in-house resources | AI Service Provider | Yes | Extracted and transcribed / translated text from documents and digital files | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0128 | Attrition and Background Models Capability | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0129 | Audio and Video Recording Management Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Prevents loss of required audio and video data within FBI custodial interview rooms | Tool facilitates audio and video recording within FBI custodial interview rooms, consistent with DOJ and FBI policy. | Audio and video recordings | a) Purchased from a vendor | No | Audio and video recordings | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0130 | License Plate Reader 2 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | License plate reader technology assists in locating vehicles associated with persons of interest. | Program uses character recognition to read and identify license plates. | Video | a) Purchased from a vendor | AI Service Provider | No | Video | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0131 | License Plate Reader 3 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | License plate reader technology assists in locating vehicles associated with persons of interest. | Program uses character recognition to read and identify license plates. | Video | a) Purchased from a vendor | AI Service Provider | No | Video | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0132 | National Crime Information Center | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Name matching | The models can search word embeddings much faster than would otherwise be possible and can return a larger set of similar phrases or misspellings. | Better search results | Better search results ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0133 | National Data Exchange | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Helps uncover valuable insights and connections in the text of criminal justice and law enforcement related information in the system. | The benefit of this use case is to provide an entity extraction feature to aid N-DEx searches so that users can accurately search for people and filter the search results based on a specified role type. | Person entity information from narrative text for lead generation | a) Purchased from a vendor | Yes | Person entity information from narrative text for lead generation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0134 | National Instant Criminal Background Check System | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Time-intensive review of database search results | Improved quality of search results for NICS analysts when they examine state law databases for laws relevant to a NICS background check. More effective searches on word embeddings will better identify patterns in the data. | Potential leads for human review | Potential leads for human review | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0135 | Next Generation Identification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The AI is intended to improve biometric and name-based matching for identification and investigation services. | The AI provides more accurate biometric and name-based matching | Biometric identification and search results containing candidates for potential investigative leads. | a) Purchased from a vendor | Yes | Biometric identification and search results containing candidates for potential investigative leads. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0136 | TIPS | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | It is intended to prioritize tips to be worked as well as determine if a tip should get a second human review. | The AI used in this case helps to triage immediate threats in order to help FBI field offices. | The system will prioritize the tips or route them to a second review based on thresholds. | c) Developed with both contracting and in-house resources | Yes | The system will prioritize the tips or route them to a second review based on thresholds. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0137 | Facial Recognition Technology and Data Mapping Tools - 2 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Identification of sex trafficking victims | Generation of investigative leads | Potential leads through suggested facial matches | a) Purchased from a vendor | No | Potential leads through suggested facial matches | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0138 | Language Translation, Optical Character Recognition (OCR), Object Detection, Language Detection, Alert Noise Reduction | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Autonomous Detection and Monitoring | Increased analytic capability | Capabilities to support data and analytics, data synthesis, filtering, and linking of open source & threat intelligence. | a) Purchased from a vendor | AI Service Provider | No | Capabilities to support data and analytics, data synthesis, filtering, and linking of open source & threat intelligence. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0142 | ASCVD (Atherosclerotic Cardiovascular Disease) Risk Estimator | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The ASCVD is used as a preventative measure to identify inmates who are at risk for heart disease in order to provide more aggressive treatment to reduce that risk. | The expected outcome is less heart disease and fewer related complications due to the proactive assessment of potential risk. | An estimated probability that the inmate will develop heart disease over the next ten years and over the inmate's lifetime. | a) Purchased from a vendor | American College of Cardiology | No | An estimated probability that the inmate will develop heart disease over the next ten years and over the inmate's lifetime. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0143 | Automated Medication Dispensing Cabinet | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | To log into the system faster and more securely by utilizing a fingerprint scanner. | To prevent unauthorized access to the medications held in the cabinet. | Assesses whether the fingerprint scanned matches the one in the system for each specific employee. | a) Purchased from a vendor | BD Pyxis | Yes | Assesses whether the fingerprint scanned matches the one in the system for each specific employee. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0144 | Automated Staffing Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | It will assess staffing levels within the FBOP. | The expected benefit of the AI use case is to maximize cost effectiveness of staffing needs for each institution. | Output reports show how many positions are currently authorized within the FBOP. | a) Purchased from a vendor | Microsoft | No | Output reports show how many positions are currently authorized within the FBOP. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0145 | Aztec Learning Software | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Provide learners with data-driven coursework in the Aztec LMS. | Prediction: Reports of trends, new authentic. | Implementation and Assessment – The AI system associated with the use case is currently undergoing functionality and security testing. | Implementation and Assessment – The AI system associated with the use case is currently undergoing functionality and security testing. ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0146 | BRAVO Classification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses statistical techniques to predict potential for misconduct for newly admitted inmates. In turn, this prediction is used to assign appropriate security levels. | Correctly classifying inmates' security level will decrease the level of misconduct towards other inmates and staff. | AUC score showing the degree to which the instrument correctly discriminates between those who commit misconduct and those who do not. | a) Purchased from a vendor | SAS | Yes | AUC score showing the degree to which the instrument correctly discriminates between those who commit misconduct and those who do not. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0147 | Building Automation Systems | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Automatic control of a building's HVAC, lighting, and other systems through a centralized building management system. | Reduces energy consumption and waste, monitors performance, and generates alerts for device failures. | It will result in alerts. | a) Purchased from a vendor | No | It will result in alerts. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0149 | Community Treatment Pipeline Screening | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This is an FBOP-developed tool to identify individuals due to be released to the community who require clinical review for additional treatment services once released into pre-release confinement. | Reduces clinical reviews by a third by prescreening those with no SENTRY or BEMR indicators necessitating potential treatment needs. | Checkmarks for potential types of treatment needs found in SENTRY or BEMR. | a) Purchased from a vendor | SAS | Yes | Checkmarks for potential types of treatment needs found in SENTRY or BEMR. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0150 | Descript | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | AI synchronizes audio files to video | Video to be used for communications and training purposes | Output is a video file (e.g., MP4) | a) Purchased from a vendor | Descript | No | Output is a video file (e.g., MP4) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0153 | Google Translate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | FBOP employees have periodically used Google Translate to translate documents and notifications for the Spanish-speaking population from English to Spanish. | Translating documents from English to Spanish gives the Spanish-speaking population the opportunity to stay well informed of the events and details of the institution. | Google Translate provides a word-for-word translation of the entered text, though it does not always translate accurately in a given context. | a) Purchased from a vendor | No | Google Translate provides a word-for-word translation of the entered text, though it does not always translate accurately in a given context. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0155 | InterQual | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0156 | Medical Claims Adjudication | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses Quantum Choice (QC) to adjudicate medical claims and analyze the data within the system. | Cost savings and ensure compliance with billing regulations and contract pricing terms. It also provides data analysis to assist with the FBOP's mission. | Medical Billing Payment decisions are made utilizing the AI. | a) Purchased from a vendor | Quantum Choice from Plexis | No | Medical Billing Payment decisions are made utilizing the AI. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0157 | Medical Designations Calculator | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Initial medical and mental health care levels. | Reducing the time to determine a final medical and mental health care level. | A medical and mental health care level score (1-4) | a) Purchased from a vendor | SAS | No | A medical and mental health care level score (1-4) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0158 | NLETs Arrest Classification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses statistical techniques to classify text descriptions of arrests | Correctly classifying arrest records to inform recidivism assessments | Classifies arrests into offense types | a) Purchased from a vendor | SAS | No | Classifies arrests into offense types | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0159 | Pathfinder | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | To assist FBOP employees with career pathways. | The system generates assessments based on user input. It scores the assessments taken to provide a user with options for career pathways. | Provides employees with career options based on assessments. | a) Purchased from a vendor | Azure | Yes | Provides employees with career options based on assessments. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0160 | Prisoner Assessment Tool Targeting Estimated Risk and Needs (PATTERN) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Agentic AI | The intended use is to predict the risk of recidivism for incarcerated adults. | Addressing the risk of recidivism with appropriate programming and services to reduce the likelihood of reengagement with the justice system. | Recidivism risk calculations. | b) Developed in-house | No | Recidivism risk calculations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0160 | Prisoner Assessment Tool Targeting Estimated Risk and Needs (PATTERN) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The tool uses pre-defined rules to score an inmate's recidivism risk level. | The tool provides the FBOP a recidivism risk instrument which objectively assesses an inmate's current level of risk for re-offending. | Recidivism risk score. | b) Developed in-house | Yes | Recidivism risk score. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0161 | Psychological Test Interpretation - Pearson Assessments | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Psychological test interpretation assistance. | Assistance during psychological evaluations and treatment planning. | Text interpretive reports. | a) Purchased from a vendor | Pearson Assessments | Yes | Text interpretive reports. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0162 | reCAPTCHA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Uses reCAPTCHA to distinguish between a human and bot request. | It provides verification and security to our websites. | Decision: It determines if the request is coming from a human or bot to allow the request to be submitted via email to the FOIA office or through the Inquiry Portal. | a) Purchased from a vendor | Yes | Decision: It determines if the request is coming from a human or bot to allow the request to be submitted via email to the FOIA office or through the Inquiry Portal. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0163 | Static-99 Sex Offender Data System (SODS) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Static-99 is used as one data point in a 10-point manual process to evaluate FBOP sex offenders' risk for recidivism. The tool uses pre-defined rules to score an inmate's recidivism risk level. | To assist the evaluator in the recidivism risk review process by providing a key data point used in the overall evaluation. | A score indicating the inmate's risk for recidivism of a sexual offense. | b) Developed in-house | Yes | A score indicating the inmate's risk for recidivism of a sexual offense. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0164 | The R Project for Statistical Computing | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0165 | Thomson Reuters Drafting Assistant Tool for MS Word | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0166 | TruNarc, Smiths Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This AI will test suspected mail for narcotics. | It is utilized to confirm staff visual identification of narcotics in institution mail. | This system will provide reports on test results. | a) Purchased from a vendor | TruNarc, Smiths Detection | No | This system will provide reports on test results. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0167 | Trunet Systems | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses biometric (fingerprints and vocal) identification. | It is an additional security feature that allows only that particular adult in custody (AIC) access to their individual account. | This technology is used to provide inmates access to their Trust Fund Account information | a) Purchased from a vendor | Advanced Technologies Group | Yes | This technology is used to provide inmates access to their Trust Fund Account information | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0168 | Truview | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Analysis of data held within the TRU, visiting, case management, and volunteer systems. | To assist with the FBOP's mission by completing assessments and analysis of the data input into the TRU, visiting, case management, and volunteer systems. | Provides users with actionable information for investigative purposes. | a) Purchased from a vendor | Advanced Technologies Group | Yes | Provides users with actionable information for investigative purposes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0169 | UAS Threat Detector | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Detects and analyzes unmanned aerial systems (UAS) near FBOP facilities to determine if the UAS is a threat to an institution. Provides reliable threat detection as part of FBOP’s overall mission to provide safe and secure facilities. | To maintain security of FBOP institutions. The benefits of such detections serve FBOP’s mission to protect society by confining offenders in the controlled environments of prisons and community-based facilities that are safe and appropriately secure. | Outputs provide UAS identification and location information. | a) Purchased from a vendor | No | Outputs provide UAS identification and location information. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0170 | UpToDate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | This tool is used for research for clinical practice guidance. The user can submit a condition and the tool compiles a list of treatments and information that is currently being recommended in the medical field. | Improved Patient Outcomes | Information, summaries, links to published peer-reviewed research, and treatments. | a) Purchased from a vendor | UpToDate | No | Information, summaries, links to published peer-reviewed research, and treatments. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0171 | Veritone | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0172 | Wellsaid | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | AI converts text to speech audio that is used in social media, training videos, and audiobooks. | Audio to be used for communications and training purposes | Output is an audio file (e.g., MP3) | a) Purchased from a vendor | Wellsaid | No | Output is an audio file (e.g., MP3) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0173 | Perimeter Detection Fence (FLIR) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0174 | Exiger Supply Chain Risk Management - DDIQ Research Engine | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provides DOJ with capabilities to perform Supply Chain Risk Management assessments to support agency cybersecurity posture. | Provides DOJ vendor profiles that aggregate data from open source repositories using API keys. This data enables DOJ to make informed decisions on whether to move forward with acquiring goods/services based on the risks identified during research. | Outputs include additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | a) Purchased from a vendor | Exiger | Yes | Outputs include additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0175 | Exiger Supply Chain Risk Management - DDIQ Due Diligence Analytics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provides DOJ with capabilities to perform Supply Chain Risk Management assessments to support agency cybersecurity posture. | Provides DOJ with analytical risk dashboards and company cybersecurity scorecards, which enable DOJ to make informed decisions on whether to move forward with acquiring goods/services based on the risks identified during research. | Outputs include additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | a) Purchased from a vendor | Exiger | Yes | Outputs include additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0177 | CoPilot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Features assist with calendaring, meeting summaries, and email drafting. | Reduces administrative time for user. | Meeting summaries and proposed email responses. | Meeting summaries and proposed email responses. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0179 | Savan Group Intelligent Records Consolidation Tool | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / NSD | DOJ-0181 | Salesforce | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses the problem of complex data analysis and decision making by providing AI-driven capabilities that simplify and accelerate the process. | Purpose: Provides contextual insights and AI-powered predictions to drive engagement and focus directly in the flow of work in Salesforce. This license also includes AI features, which are not in use at this time. Expected benefits: NSD procured CRM Analytics Plus licenses for the use of Tableau for reporting. | Drive outcomes at scale and get answers to inform. | Drive outcomes at scale and get answers to inform. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0184 | Enhanced Proactive Financial Analysis Techniques for Fusion Desktop | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Deploys predictive analytics to enhance recognition of, and highlighting of, fraudulent financial activity across HSTF NCC financial and criminal holdings. | Improve efficiencies by identifying and prioritizing financial fraud activity tied to open criminal investigations by including a broader data set and across multiple judicial districts to develop a better whole-of-US picture of financial fraud. | List of subjects with open criminal investigations which would be worth additional manual review for financial ties. | List of subjects with open criminal investigations which would be worth additional manual review for financial ties. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0185 | FinCEN Data Summarization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | Analyze a SAR through generative AI functionality | Speed up production times for providing investigative support to the field. It also makes analysts more available to perform deeper analysis. | Summarization of applicable FinCEN suspicious activity reports (SARs). | Summarization of applicable FinCEN suspicious activity reports (SARs). | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0186 | Generation of Graphs and Charts | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Create charts and graphs by describing the requested data to integrate. | Charts and graphs aid in reporting of HSTF investigations. Enabling this technology will tremendously reduce the time spent formatting in traditional Microsoft products. | Recommendation: Provide sample data to create charts and graphs to populate. | Recommendation: Provide sample data to create charts and graphs to populate. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0187 | Generation of Large Test Data Sets | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Using generative AI modeling, we will create large volumes of test data so that quality assurance, external audits and penetration testing, system demos & integration, etc., can use that generated test data at scale. | Developing and operating with a test set of data will enable engagements between HSTF and external groups, where manual generation of test data at scale is not feasible. | Test Data Sample and/or mock up of test data from the applications used within the environment | Test Data Sample and/or mock up of test data from the applications used within the environment | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0188 | Machine Learning for Decision Support for Fusion Desktop | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | To take into account various criteria such as analyst workloads, similarities in case narratives and targets, and regional factors to suggest the optimal routing and assignment for product requests. | Improve efficiencies in routing requests for HSTF Products to the analysts and/or analytic units most suitable for working the product. Reduce rework and streamline the approvals process by suggesting agency approvers based on automated review of the product's content and ensuring all appropriate agencies are consulted for manual review. | Suggested assignment / routing for HSTF NCC Product Requests (e.g. to an analytic unit or specific analyst). Suggested approver assignments based on referenced agency data. | Suggested assignment / routing for HSTF NCC Product Requests (e.g. to an analytic unit or specific analyst). Suggested approver assignments based on referenced agency data. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0189 | Machine Learning for Decision Support for DOJ MIS | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Identify proposed HSTF investigations for prioritized review, approval, and potential case designation based on pre-determined success criteria. | Improve efficiencies in routing HSTF proposals through the workflow to receive HSTF designation. Identify the best use of resources through machine learning trained on investigations matching emerging threats. | Lists of proposed HSTF investigations in prioritized order. | Lists of proposed HSTF investigations in prioritized order. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0190 | Narrative Analysis for Fusion Desktop | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Will create summaries or precis from large volumes of text narratives submitted on Fusion Desktop forms, for both information triage and trend analysis / emerging threat detection purposes. Gen AI would also allow interactive queries (e.g. you ask it questions) about the data it has summarized, and would be transparent (with citations of underlying data as needed). | Provide rapid review of information submitted as data ingested into the Fusion Desktop database. This will reduce the time necessary during data entry and review and will provide guidance on emerging threat areas which HSTF NCC may use to determine resource allocation or areas for additional study. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0191 | Narrative Analysis for DOJ MIS | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Will create summaries or precis from large volumes of text narratives submitted on HSTF forms, for both information triage and trend analysis / emerging threat detection purposes. Gen AI would also allow interactive queries (e.g. you ask it questions) about the data it has summarized, and would be transparent (with citations of underlying data as needed). | Provide rapid review of information submitted on DOJ MIS forms, including Investigation Initiation Forms (IIFs) and interim and final updates on open investigations. This will reduce the time necessary during data entry and review for forms and will provide guidance on emerging threat areas which HSTF NCC may use to determine resource allocation or areas for additional study. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0192 | Train ML on Intelligence Analyst Best Practices | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Train AI on exemplary analyst work to suggest courses of action to all IAs while they perform product research and development. | Improve efficiencies in developing HSTF NCC Products by providing decision support and limited automation by ML trained on exemplar analysts at the NCC. Provides guidance and suggestions particularly for resource-intensive actions, like conducting open source, commercial, or offline (swivel-chair) searches of data sets outside the NCC. | Suggested actions at each phase of HSTF NCC Product development workflow. | Suggested actions at each phase of HSTF NCC Product development workflow. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0193 | Speech to Text Managed Service - Voice Transcription to Text | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Reduce turnaround times for transcription and reduce the need for contracted transcription services for audio files | Cost savings from less time and materials needed for an individual to fully process the recording, and quicker turnaround of transcription delivery. | Speech-to-text recognition of spoken content within an audio file. In addition, a summary of the transcribed content can be created. | c) Developed with both contracting and in-house resources | Microsoft, OpenAI | Yes | Speech-to-text recognition of spoken content within an audio file. In addition, a summary of the transcribed content can be created. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0194 | Inspection Productivity Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Generative AI | Reduce the time involved in analyzing draft reports and content recommendations in compliance with published standards, guidance, and rule books. | Quicker turnaround for creating reports, analysis of reports to ensure consistency in content and removal of redundancy, and assistance in managing report length. | Suggested phrases to reword a paragraph and/or identification of errors in statements as they align towards defined standards, guidance, or rule books. | Suggested phrases to reword a paragraph and/or identification of errors in statements as they align towards defined standards, guidance, or rule books. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0195 | Internal Component-specific chatbot service | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provide human-like conversational responses to conduct general information queries and suggestions towards improving text without feeding into a commercially exposed AI model. | Improve work efficiencies and curb user generative AI usage to an environment that is controlled and contained within GCCH and the DOJ OIG subscription vice a commercial service. Furthermore, the information and models generated within the GCCH environment stay within GCCH and do not further train the models of commercial products or services. | The AI provides generative AI human-like responses/answers to questions and/or statements from a user. In addition, the output can be a generated summary of an inputted excerpt or publicly available information. | a) Purchased from a vendor | Microsoft, OpenAI | Yes | The AI provides generative AI human-like responses/answers to questions and/or statements from a user. In addition, the output can be a generated summary of an inputted excerpt or publicly available information. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0196 | Dragon | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Automated word processing dictation and speech transcription. The application is speech recognition software that initiates commands on a device or takes dictation into a word processing application. The application assists individuals with reasonable accommodations related to and/or difficulty typing, seeing, or navigating a Windows operating system environment. | Provide reasonable accommodations related to and/or difficulty with typing, seeing, or navigating a Windows operating system environment to be productive throughout the workday. | Transcription of dictated words into a word processing document or initiation of macro commands in the Windows operating system environment. | a) Purchased from a vendor | Nuance Communications (owned by Microsoft) | Yes | Transcription of dictated words into a word processing document or initiation of macro commands in the Windows operating system environment. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||
| Department Of Justice | Department of Justice / OIG | DOJ-0197 | Informatica - CLAIRE | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Manual scanning and cataloging of data sets related to data analytics' mass data correction, business rules translation, data/column similarity, data anomaly detection, data relationship inference, data domain inference, data volume projections, cost of data breach, natural language description of code, business term associations, schema mapping, entity extraction, smart data visualization, and economic value of data. | The Informatica CLAIRE engine will help catalog enterprise data quickly and classify and organize OIG data. It will also automate data curation, connect data across the OIG from disparate sources, and track data movement from system views to column-level lineage. | Provide machine learning-based discovery to scan and catalog data assets across the OIG. Enterprise Data Catalog provides intelligence by leveraging metadata to deliver recommendations, suggestions, and automation of data management tasks. | a) Purchased from a vendor | Informatica | Yes | Provide machine learning-based discovery to scan and catalog data assets across the OIG. Enterprise Data Catalog provides intelligence by leveraging metadata to deliver recommendations, suggestions, and automation of data management tasks. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0198 | Nlets Nationwide License Plate Reader Pointer Index | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Law enforcement usage of Optical Character Recognition (OCR) to assist with license plate reading under DOJ Policy. LPR data is managed and maintained by other entities for law enforcement purposes. | Quick notification of results from license plate queries. | JWIN facilitates access to the Nlets Nationwide License Plate Reader Pointer Index, which provides access to states and/or federal agencies that maintain their own LPR repositories. LPR data is managed and maintained by other entities for law enforcement purposes. | a) Purchased from a vendor | Thomson Reuters | Yes | JWIN facilitates access to the Nlets Nationwide License Plate Reader Pointer Index, which provides access to states and/or federal agencies that maintain their own LPR repositories. LPR data is managed and maintained by other entities for law enforcement purposes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0199 | Axon | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | No AI is utilized, but the use case captures raw video and audio footage that is accessed only to the extent needed for prosecution or investigation purposes. AI functionalities for analysis, voice transcription, and redaction are not utilized or relevant for our use. | This use case does not utilize AI functions. AI features are included in the product, but have not been installed and are not relevant for our use. | No AI system outputs. Standard outputs are archival raw videos and audio footage. | a) Purchased from a vendor | Axon | Yes | No AI system outputs. Standard outputs are archival raw videos and audio footage. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0200 | SAS Enterprise Miner - Grant risk assessment model | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0201 | MACO Project: Law Enforcement CAD Data Autocoder | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | MACO is intended to solve the lack of standardization in Computer-Aided Dispatch (CAD) data across law enforcement agencies. CAD event descriptions are free-text, highly variable, and differ widely in terminology, structure, and coding practices. Because of this inconsistency, it is currently not possible to aggregate, compare, or analyze CAD data across jurisdictions at scale. MACO uses machine learning and language models to automatically classify raw CAD text into a standardized event taxonomy, enabling consistent analysis, cross-agency comparisons, and the development of national estimates of calls for service. | The AI provides standardized classifications of CAD event text, allowing BJS and partner agencies to analyze police activity consistently across jurisdictions. For BJS, this enables the production of scalable, comparable national estimates of calls for service—filling a major data gap not addressed by traditional crime measures. For state and local agencies, the standardized schema improves internal organization of CAD data and supports regional or state-level comparisons of police workload and community needs. For the research community and the public, MACO expands understanding of how law enforcement resources are used, the types of events agencies respond to, and broader patterns of community demand for police services. Overall, the tool enhances data quality, improves analytic capacity, and supports evidence-based decision-making across the criminal justice ecosystem. | The system outputs a standardized event-type classification for each CAD record. For each raw text description, the model generates a predicted category from a predefined event taxonomy (e.g., “Property Crime: Theft,” “Traffic Incident,” “Disturbance,” etc.). The final deliverable is a CAD dataset with these standardized classifications appended to each record, enabling consistent analysis and aggregation across agencies. | The system outputs a standardized event-type classification for each CAD record. For each raw text description, the model generates a predicted category from a predefined event taxonomy (e.g., “Property Crime: Theft,” “Traffic Incident,” “Disturbance,” etc.). The final deliverable is a CAD dataset with these standardized classifications appended to each record, enabling consistent analysis and aggregation across agencies. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0203 | Offense Text Auto-Coder (OTAC) - Automated offense coder from offense charge text strings used for the BJS National Pretrial Reporting Program | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The purpose of this tool is to improve description and comparability of offense charges across jurisdictions. When using justice administrative data from various jurisdictions (localities, states, and federal), the way offenses are described (i.e., the exact text strings used) varies greatly. For example, assault and battery may be spelled out, or abbreviated in novel ways such as A&B, A & B, A+B, A?B, battery & aslt, etc. This tool is used to facilitate grouping identical concepts under one common set of offense codes. The data that BJS makes available will be aggregated or deidentified. | Improved comparisons of criminal justice data | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | b) Developed in-house | Yes | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0205 | Rapid Offense Text Autocoder (ROTA) - Automated offense coder from offense charge text strings used for the BJS Criminal Cases in State Courts | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The purpose of this tool is to improve description and comparability of offense charges across jurisdictions. When using justice administrative data from various jurisdictions (localities, states, and federal), the way offenses are described (i.e., the exact text strings used) varies greatly. For example, assault and battery may be spelled out, or abbreviated in novel ways such as A&B, A & B, A+B, A?B, battery & aslt, etc. This tool is used to facilitate grouping identical concepts under one common set of offense codes. The data that BJS makes available will be aggregated. | Improved comparisons and analysis of criminal justice data. | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | b) Developed in-house | No | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0206 | Research Abstract Screening for CrimeSolutions | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / PAO | DOJ-0208 | Adobe | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DOJ needs to communicate effectively internally and externally. This tool can support image production for communication purposes. | Adobe has a function that enables the generation of images. PAO does not use this function/feature. | Images, if the feature were in use | Images, if the feature were in use | ||||||||||||||||||||
| Department Of Justice | Department of Justice / PAO | DOJ-0210 | Hootsuite | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Optimizes social media posting | PAO uses Hootsuite to schedule and manage social media products. Hootsuite also includes AI-powered social listening features, but PAO does not use those features. It can advise on the best time to post for maximum engagement, which helps promote our message. | Recommendations about date and time to publish content | a) Purchased from a vendor | Hootsuite | Yes | Recommendations about date and time to publish content | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / PAO | DOJ-0212 | Veritone Digital Media Hub | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Expedites the identification and labeling of photos. | The Veritone Digital Media Hub includes AI features that allow PAO to search our event photo databases and identify objects in those photo catalogs. Increased efficiency of searches of archival images so that PAO can find and continue using assets. | Recommendations of images that match the terms searched for | a) Purchased from a vendor | Veritone | Yes | Recommendations of images that match the terms searched for | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0213 | Lexis Nexis (People Search) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve the accuracy and comprehensiveness of a clemency applicant's personal data. | Allows the retrieval of personal data for an individual such as historical addresses. Improve the accuracy and comprehensiveness of a clemency applicant's personal data. | Summary | a) Purchased from a vendor | Lexis Nexis | Yes | Summary | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0215 | Pacer Search | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Validate identity | The service provides more confidence that the correct person has been identified | PII about the individual. | a) Purchased from a vendor | Techsmith | No | PII about the individual. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0217 | Westlaw (People Search) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve the accuracy and comprehensiveness of a clemency applicant's personal data. | Allows the retrieval of personal data for an individual, such as historical addresses, improving the accuracy and comprehensiveness of a clemency applicant's personal data. | Summary | a) Purchased from a vendor | Westlaw | Yes | Summary | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0220 | AWS Transcribe | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Creates machine transcription of audio or video to facilitate review and evaluation of evidence. | Machine transcription facilitates faster review of data, decreasing time spent listening to and evaluating audio and video files. Saves funds that would otherwise be spent on transcription vendors. | Transcribed text | a) Purchased from a vendor | Amazon Web Services | Yes | Transcribed text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0221 | AWS Translate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Creates machine translation of foreign language documents to facilitate review and evaluation of evidence. | Machine translation facilitates faster review of foreign language documents. Permits selection of key documents to be sent to vendors for evaluation and translation, speeding up review considerably and saving funds that would otherwise be spent on translation vendors. | Machine translated text | a) Purchased from a vendor | Amazon Web Services | Yes | Machine translated text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0222 | JAWS (Text-to-Speech Assistant for Accessibility) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Visually impaired personnel need assistance interpreting pictures and other visual objects and interacting with documents in a non-linear way | Aids visually impaired personnel with documents. | Audio descriptions of images and summaries of text | a) Purchased from a vendor | Freedom Scientific Inc. | No | Audio descriptions of images and summaries of text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0223 | Trial Presentation Software | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Assists in presenting documents and video in a courtroom setting | More effective courtroom advocacy; decreased time spent assembling presentations | Courtroom presentations | a) Purchased from a vendor | OnCue Technology, LLC | Yes | Courtroom presentations | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0224 | Deposition Transcript Management | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Assists with marking up deposition transcripts and video | More efficient organization, annotation, and display of transcripts and deposition video | Annotated deposition video and transcripts | a) Purchased from a vendor | LexisNexis | No | Annotated deposition video and transcripts | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0225 | Axon Video Retention Solution (VRS) - object recognition and redaction | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The Object Recognition AI capabilities present in the Axon Evidence Redaction suite creates first-pass results for potential desired redactions of individual's faces, license plates and computer terminals. The AI is intended to reduce manual work effort and reduce the time it takes to redact video and audio footage. | This product allows for the protection of identities of Law Enforcement Officers and the public. The only subjects not blurred/redacted are of the target(s) of the arrest. In addition the expected benefits for using the object recognition capability includes reduced USMS personnel time required to produce redacted footage and obviate the need to procure outside redaction services. | Draft video file with USMS-selected desired redaction areas (faces, license plates or computer terminals) Note: Redacted file is not complete until human intervention validates and/or corrects AI suggestions. | a) Purchased from a vendor | Axon Enterprise, Inc. | Yes | Draft video file with USMS-selected desired redaction areas (faces, license plates or computer terminals) Note: Redacted file is not complete until human intervention validates and/or corrects AI suggestions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0226 | Facial Recognition Technology | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Facial Recognition Technology helps to narrow down potential subjects for further investigative analysis. The AI assists with the possible identification of an investigative subject but the AI in this use case is only an investigative lead and never grounds for law enforcement actions. All leads generated with this AI use must be corroborated with additional law enforcement techniques before actioned. | An increase in investigative efficiency leading to faster apprehension of violent fugitives and sex offenders and more rapid recovery of critically missing children. | Matches query photograph with publicly available images. | a) Purchased from a vendor | Clearview AI | Yes | Matches query photograph with publicly available images. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0227 | JMIS: JARS | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Increases efficiency in the movement of prisoners and reduces manual labor. | Cost savings and labor savings | Suggested scheduled prisoner movements based on previous successful movements | b) Developed in-house | Yes | Suggested scheduled prisoner movements based on previous successful movements | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0228 | JMIS: Route Optimizer | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | More accurate optimization of flight schedule | Evaluates over 2000 possible flight options and selects the most optimal flight. Improves efficiency of JPATS through the optimal use of flight assets. | Proposed flight schedule | c) Developed with both contracting and in-house resources | Yes | Proposed flight schedule | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0229 | UiPath OCR activity | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Errors and delays in manual data interpretation and data entry impact critical events in customer journeys and business process efficiency. UiPath states that it is automation software with intelligent document-processing capabilities that replace manual processes before and after the reading of the flat file. Its stated features include extracting text and allowing entire workflows to take place in a single application with one application license. | Automating data extraction from various documents and images, thereby increasing efficiency, accuracy, and speed in processes that involve manual data entry and document processing. | Extract and interpret data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. It is designed to process documents intelligently, using a combination of rules, templates, and specialized or generative language models. | Extract and interpret data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. It is designed to process documents intelligently, using a combination of rules, templates, and specialized or generative language models. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0230 | Unmanned Aerial Systems (UAS) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Agentic AI | Collision avoidance | The AI predicts the movement of obstacles and subjects to plot a safe and efficient flight path. This allows the drone to anticipate and smoothly maneuver around objects instead of simply reacting to them. Skydio drones use AI to power their core autonomy features, enabling them to fly themselves safely and intelligently while a human operator focuses on the mission. | Skydio's AI output systems provide automated, real-time data capture and modeling for complex environments by combining advanced onboard AI and computer vision with high-resolution cameras | a) Purchased from a vendor | Skydio | Yes | Skydio's AI output systems provide automated, real-time data capture and modeling for complex environments by combining advanced onboard AI and computer vision with high-resolution cameras | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0231 | Video Transcription Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | USMS uses open source natural language processing technologies and python code to transcribe an audio or video file into plain text. | The product allows analysts to quickly convert a video or audio file to text. | Transcription NLP algorithm | b) Developed in-house | Yes | Transcription NLP algorithm | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0232 | Axon Video Retention Solution (VRS) - Transcription | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The Transcription AI capabilities present in the Axon Evidence Digital Evidence Management System creates first-pass results for transcription of spoken language in video and audio files into text. The AI is intended to reduce manual work effort and reduce the time it takes to transcribe words spoken in video and audio files into typed text. | The expected benefits for using the transcription capability includes reduced USMS personnel time required to manually transcribe words spoken in video and audio files into typed text and obviate the need to procure outside transcription services. | Draft transcribed text associated to video/audio file being transcribed. Note: Transcription is not complete until human intervention validates and/or corrects AI suggestions. | a) Purchased from a vendor | Axon Enterprise, Inc. | Yes | Draft transcribed text associated to video/audio file being transcribed. Note: Transcription is not complete until human intervention validates and/or corrects AI suggestions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0233 | Open Source Investigative Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | USMS uses this tool for authorized investigative lead generation which will improve efficiencies of public safety and law enforcement missions. | More efficient screening of leads for potential investigative actions. | Recommendation | a) Purchased from a vendor | Vendor proprietary | Yes | Recommendation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0234 | JPATS Mobile App (Biometrics) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Quicker identification of prisoner record from mobile manifest | Time and labor savings resulting in cost savings | Manifest record of prisoner being moved | c) Developed with both contracting and in-house resources | Rank One Computing | Yes | Manifest record of prisoner being moved | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / USNCB | DOJ-0235 | Aware | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This commercial tool allows USNCB to ingest and process biometric information shared by domestic and international partners. | Empowers USNCB to automate processing biometric data, improving the speed with which information is shared with partners. | Boarding passes, Ticket changes, Tools for managing changes in travel | a) Purchased from a vendor | Aware Technologies | Yes | Boarding passes, Ticket changes, Tools for managing changes in travel | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / USNCB | DOJ-0236 | Language Weaver | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / USNCB | DOJ-0239 | Thomson Reuters CLEAR | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Access to large data sets used for locating persons of interest. | Timely access to key data in the pursuit of persons of interest | Data | a) Purchased from a vendor | Thomson Reuters | Yes | Data | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / USTP | DOJ-0240 | USTP AI Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Intended purpose is to enable USTP's IT team to understand the process to develop, test, tune, and use various Microsoft AI services (Azure OpenAI, Azure AI Foundry, Microsoft Copilot Studio). A secondary purpose is understanding the full costs and time to develop and implement a use case from start to finish. A tertiary purpose is to ensure the Generative AI service is providing useful responses and value given the cost and time to set up and deploy. | Intended benefits of using Microsoft AI services would be to enable USTP staff to find information quickly on relevant questions they may have. The use cases tested in the Pre-Deployment phase would help reduce Help Desk calls and increase productivity of users by enabling them to find technical information quickly. Potential to help generate new content based on pilot testing. | Because outputs vary with each AI Assistant's objective, USTP provides two examples currently in Pre-Deployment technical feasibility testing: 1) HR Assistant - trained on USTP's SharePoint Intranet pages to answer common questions about various human resources support issues; 2) Briefing Assistant - reviews previously submitted public USTP briefs on specific bankruptcy cases so those cases can be easily searched and found. | Because outputs vary with each AI Assistant's objective, USTP provides two examples currently in Pre-Deployment technical feasibility testing: 1) HR Assistant - trained on USTP's SharePoint Intranet pages to answer common questions about various human resources support issues; 2) Briefing Assistant - reviews previously submitted public USTP briefs on specific bankruptcy cases so those cases can be easily searched and found. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0242 | Writing assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Quality and consistency of written comments | Faster FBI operations | Text | Text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0243 | Voicemail Transcription, Translation and Summarization | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | a) High-impact | High-impact | Generative AI | PARDON would like to leverage transcription, translation and summarization services available in the GCC high cloud environment to help reduce the processing time and level of effort associated with responding to voicemail inquiries. | Cost savings, reducing customer wait times, improving customer experiences, improving PARDON Attorney experiences, and improving multi-lingual access to the government. | The primary output from this use case is an email that provides an AI generated summary of the voicemail and includes the following attachments: the original voicemail (.wav file), a text file that includes the transcribed voicemail, and a text file that includes the translated voicemail (for non-English voicemails). This email is sent to the shared PARDON inbox and processed with other email requests. | a) Purchased from a vendor | Amazon | Yes | The primary output from this use case is an email that provides an AI generated summary of the voicemail and includes the following attachments: the original voicemail (.wav file), a text file that includes the transcribed voicemail, and a text file that includes the translated voicemail (for non-English voicemails). This email is sent to the shared PARDON inbox and processed with other email requests. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0244 | Veritone | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve transcription and translation of written, video, and audio files | Veritone is used to translate and transcribe non-English language audio for attorney review. This machine translation does not constitute an official record, but is a tool to allow initial review of the audio by the attorney. Veritone is also used to provide English language translations of large sets of documents containing non-English text in order to get an initial idea of the document contents and does not replace official translation of evidence. | For text files that are translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created as well as a closed captioning file. | a) Purchased from a vendor | Veritone | Yes | For text files that are translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created as well as a closed captioning file. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0245 | UiPath Document Understanding | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Reduce errors and delays in manual data interpretation and data entry. Potential for significant time savings, additional security and reliability since staff are not working through multiple applications to get the same task done. | Use case helps improve customer experience. It simplifies the processing of complex, unstructured data, expediting decision-making, processing, onboarding, and servicing. Automating document processing also reduces the risk of errors. By mitigating the risk of human error, data input errors, missed information, and incorrect procedures are less likely to occur. The result is improved compliance, reduced time people spend on rework, and fewer losses for the agency. | Extract and interpret data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. | Extract and interpret data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0246 | UFMS ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Create a personal assistant to provide application-level support to Unified Financial Management System (UFMS) users based on their functional needs/tasks. | Reduce the need for system users to do manual research, reducing subsequent tier 1 help desk requests. | Formatted text response to specific user questions. | Formatted text response to specific user questions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0247 | Translation tool | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provide USMS with tools that can be integrated with USMS and USNCB data stores, such as email content and other files, to translate multiple languages to English and vice versa. | The proposed tools are more cost-efficient than other software-based solutions. In addition, the tool provides significantly more languages than those purchased with previous tools. | Translated text | Translated text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0248 | Transcription | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Time-intensive manual transcription | Faster FBI operations | Form with transcribed text | Form with transcribed text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0249 | Table of Contents / Table of Authorities Word plugin | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Efficient generation of tables of contents and tables of authorities | Greatly reduces attorney and support staff time spent on generating tables | Tables of contents and authorities in draft briefs | a) Purchased from a vendor | Levit & James, acquired by Litera | No | Tables of contents and authorities in draft briefs | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0250 | Systran Translate Server | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | CRM uses Systran Translate Server for machine translation. | Translate data from many languages to allow for review and investigation. Systran leverages NLP research with human expertise to train and evaluate models. | Translation | a) Purchased from a vendor | Systran | Yes | Translation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0251 | System Performance Monitoring | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Improve monitoring of the health of IT and litigation support systems. Better anticipation of outages or slowdowns. | Fewer and shorter IT outages due to faster response times and better anticipation of problems | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0252 | Synthetic data generation for software testing | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Synthetic data generation for software testing | Faster FBI operations | test data | test data | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0253 | Symphony AD-Hoc Batch Processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Translation, transcription, and summarization tool for language processing | Faster FBI operations | Transcribed and translated language | b) Developed in-house | Yes | Transcribed and translated language | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0254 | Summarizing Inspection Actions and Results for Future Inspectors | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The Office of Inspection (IN) is leveraging the Business Improvement Section (ACB) to automate a large number of their work processes. | Distilling large data sets, interpreting graphs, writing final reports, and summarizing results is time-consuming. An AI prompt can assist in the creation of a rough draft in significantly less time than the effort required of DEA's human capital resources. Cost savings could be achieved from reduced full-time equivalents spent on generating end user products. In addition, the turnaround time involved with many inspection result findings work processes would decrease. | A rough draft of inspection results would be generated for review and approval. | b) Developed in-house | No | A rough draft of inspection results would be generated for review and approval. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0255 | Summarization Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Summarization | Faster FBI operations | Data summarization | a) Purchased from a vendor | No | Data summarization | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0256 | Stream Processing & Analytics | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Stream processing framework for complex analytics | We have identified a number of open-source tools that we believe could assist us with graphing relationships and other forms of data visualization. | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0257 | Storyblocks | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0258 | Smartphone and tablet operating systems and features | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DOJ's time-sensitive mission benefits from optimizing use of its smartphones and tablets. It also needs to update operating systems for cybersecurity features. | Supports DOJ personnel in better serving the American public, especially when not directly utilizing their DOJ-issued computers. | Optimized performance and functionality of approved capabilities on DOJ-issued smartphones or tablet devices. | a) Purchased from a vendor | No | Optimized performance and functionality of approved capabilities on DOJ-issued smartphones or tablet devices. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0259 | Sentiment Analysis Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Human Resources | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Improved marketing of FBI jobs | Improved FBI recruitment and hiring | Aggregated trends and patterns | a) Purchased from a vendor | No | Aggregated trends and patterns | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0260 | Search Tool for Prioritization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Search result prioritization | Faster FBI operations | Prioritized search results | Prioritized search results | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0261 | Search Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Better search results | Saving time resulting in faster FBI response | Search results | a) Purchased from a vendor | AI Service Provider | No | Search results | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0262 | Redaction Tool 2 | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Time-intensive manual tasks | Faster FBI operations | Suggested redactions | Suggested redactions | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0263 | Redaction Tool 1 | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Time-intensive manual tasks | Faster FBI operations | Suggested redactions | c) Developed with both contracting and in-house resources | Yes | Suggested redactions | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0264 | Record Digitization | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Computer Vision | Many agency records, including immigration case files, are paper records. EOIR must digitize paper records into electronic records to comply with federal laws requiring the agency's transition to digital processes. Parties to EOIR immigration proceedings experience delays in accessing case-related information when case records are maintained in paper format. EOIR must make voluminous copies of paper records to respond to records requests or otherwise spend time scanning paper records for digital transmission. EOIR has limited storage space available for paper records. | Transition many components of the case adjudication process to a more efficient, primarily digital process. Improve access to case information for parties to EOIR immigration proceedings. Improve record request and response processes. Eliminate need for costly physical space to store paper records. | Digital agency records of sufficient authenticity, reliability, usability, and integrity to replace the original paper record. | Digital agency records of sufficient authenticity, reliability, usability, and integrity to replace the original paper record. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0265 | Public Comment Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | DOJ strives to better serve the American public. To support timely responses following public comment submissions, especially for regulatory missions, this capability will facilitate efficient processing of duplicate and similar comments, while helping categorize, cite, and map public comments. | This tool improves text analysis and the timeliness of such analysis. Importantly, the tool contains technology to quickly identify, organize, and address high-volume public comments, including letter submissions. It supports the development of dashboards to provide metrics. | Data results, comparisons, and analyses for DOJ personnel to review and assess. | a) Purchased from a vendor | Docketscope | No | Data results, comparisons, and analyses for DOJ personnel to review and assess. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / PRAO | DOJ-0266 | ProLaw | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | ProLaw assists PRAO attorneys in searching a large quantity of prior PRAO inquiries and advice (stored within the database) to identify relevant historical inquiry files that will assist the attorneys in determining how PRAO has advised on similar matters in the past. | The benefit of ProLaw is that it enables PRAO to store digitally, consistent with the component's records retention schedule, all PRAO inquiry files and then quickly search large quantities of inquiry files to identify historical inquiries and advice relevant to a current matter a PRAO attorney is working on. This greatly reduces the amount of time PRAO staff spend on research, which in turn reduces the wait time of the Department attorney who has requested PRAO advice. Because ProLaw allows PRAO to store records digitally, it also provides government cost savings in the amount of money paid to store hard-copy records. | ProLaw's output is information. Specifically, PRAO uses ProLaw to identify all of the digital inquiry files in the database that are consistent with the user's selected search query. | a) Purchased from a vendor | Thomson Reuters | No | ProLaw's output is information. Specifically, PRAO uses ProLaw to identify all of the digital inquiry files in the database that are consistent with the user's selected search query. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0267 | Procurement Data Triage Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | To assist analysts in manual review | More efficient and comprehensive procurement decisions | Data triage | a) Purchased from a vendor | AI Model Provider | No | Data triage | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0268 | Administrative Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Time-intensive responses to questions | Faster FBI operations | Answers to questions | Answers to questions | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0269 | Policy Chatbot | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Chatbot for policy | Faster FBI operations | Location of user manuals and documentation | a) Purchased from a vendor | AI Model Provider | Yes | Location of user manuals and documentation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0270 | PLX | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Forensic analysis of data from multiple electronic investigation sources, including mobile phones, computers, and warrant returns, within the context of criminal investigations. | Increases the efficiency of identifying pertinent information within the context of criminal investigations. | Notifications of potential entity matches for review. Link analysis visualizations. | a) Purchased from a vendor | PenLink | No | Notifications of potential entity matches for review. Link analysis visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0271 | Pega GenAI | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | GenAI is a tool that can help developers generate workflows (code in Pega) faster | Increased developer throughput on the Pega Platform | It generates software that is unique to the Pega program. | It generates software that is unique to the Pega program. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0272 | Palantir | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Integration and analysis of case information. | Reduction in time required to update and maintain an accurate case management system. | Reports, narratives, and summaries | a) Purchased from a vendor | Palantir | No | Reports, narratives, and summaries | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0273 | Optical Character Recognition Tool 3 | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Automating administrative tasks | Faster FBI operations | Text data | a) Purchased from a vendor | Yes | Text data | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0274 | Optical Character Recognition Tool 1 | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0275 | Optical Character Recognition Tool 2 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Digitization of data | Faster FBI operations | Text | c) Developed with both contracting and in-house resources | No | Text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0276 | Object Detection Tool 2 | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Data triage | Faster FBI operations | Investigative leads | Investigative leads | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0277 | Object Detection Tool 1 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Identifying if there is a barrier to biometric matching | Better biometric matching | Probability score for presence of a barrier | a) Purchased from a vendor | Yes | Probability score for presence of a barrier | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0278 | NIST Compliance Recommender | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Manual assessments of NIST guidelines | Faster FBI operations | Reports and data | b) Developed in-house | No | Reports and data | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / OPR | DOJ-0279 | NetDocuments | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The AI functionality of NetDocuments consists of predictive suggestions for saving files, such as Word documents and Outlook emails. Through machine learning, NetDocuments predicts the OPR matter into which it thinks the end user should save a specific file. The AI feature is intended to make the process of saving files into the pertinent matter number more efficient and streamlined. | Because NetDocuments is OPR's repository of records, it is important that OPR staff save all matter-related files into NetDocuments associated with the correct OPR matter number. The AI feature in NetDocuments is expected to make the process of saving files into NetDocuments more efficient and user-friendly. That will both encourage end users to save files into NetDocuments and assist in making sure that files are correctly associated with the proper OPR matter numbers. | The AI output from NetDocuments consists of predictive recommendations for the specific OPR matter numbers to which OPR staff should associate files saved into NetDocuments. | a) Purchased from a vendor | Inonde, NetDocuments | Yes | The AI output from NetDocuments consists of predictive recommendations for the specific OPR matter numbers to which OPR staff should associate files saved into NetDocuments. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0280 | Named Entity Recognition | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Entity Extraction | Faster FBI operations | Resolved entities in a searchable index. | b) Developed in-house | Yes | Resolved entities in a searchable index. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0281 | Microsoft Office 365, Teams, and Windows default features | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Finding opportunities to optimally support DOJ personnel through existing Department-wide tools. | These capabilities help DOJ personnel achieve efficiencies through integrated AI assistance across O365 applications in a secure environment, including editorial/grammatical suggestions, data analysis, task automation, and enhanced search. | Improved user experience through qualitative and quantitative suggested improvements, analyses, visualizations. | a) Purchased from a vendor | No | Improved user experience through qualitative and quantitative suggested improvements, analyses, visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0282 | Link Analysis and Chart Creation from Narrative Summaries | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | Linkages are generally identified manually based on human review of unstructured narratives. This is time-consuming. | AI batch review of large groups of data sources to identify linkages, permitting staff to focus their review and analysis. | Recommendation Sample of narrative to identify and create links in correlated data | Recommendation Sample of narrative to identify and create links in correlated data | |||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0283 | Knowledge retrieval and synthesis (Azure OpenAI Services) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Azure OpenAI will help ATR review large volumes of document-based data quicker and more efficiently. | Enhance the speed and accuracy of legal analysis and review. This technology will allow ATR to quickly distill key information and insights, streamline workflows, reduce manual effort, and expedite legal analysis. | Text generation Answers and insights to critical legal questions Legal summaries Legal citation and document references | a) Purchased from a vendor | Microsoft | Yes | Text generation Answers and insights to critical legal questions Legal summaries Legal citation and document references | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | |||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0284 | Internal Finance ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Create a personal assistant to provide application-level support to Workiva users based on their functional needs/tasks. | Reduce the need for system users to do manual research, reducing subsequent tier 1 help desk requests. | Formatted text response to specific user questions. | Formatted text response to specific user questions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0285 | Intelligent Workflow Optimization | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Inefficiencies in agency processes, operations, and workflows. | Recommendations and suggestions to improve various aspects of the workflows, operations, and processes supporting EOIR's mission functions. Improve and optimize EOIR's overall performance of its mission functions. | Recommendations for changing workflows, processes, and operations. | Recommendations for changing workflows, processes, and operations. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0286 | Immigration Hearing Transcription and Translation/Interpretation Services | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | Language translation/interpretation and hearing transcription services for EOIR immigration proceedings are completed manually, requiring significant money and time costs. Current manual processes are slow and labor-intensive and prolong various stages of immigration proceedings. In-person translators have limited availability to attend immigration proceedings. AI-assisted real-time language translation and AI-assisted transcription of hearings can automate parts or whole processes for EOIR language interpretation and hearing transcription services to optimize resources, time, and costs expended by the agency and the public for EOIR immigration proceedings. AI-assisted translation and transcription can automate processes for transcribing audio recordings of immigration hearings into searchable text and interpreting testimony given in a foreign language in real-time for court staff and parties to proceedings. | AI-assisted transcription may reduce or eliminate steps currently needed for the manual transcription process by creating preliminary drafts of hearing transcripts for manual review and verification. AI-assisted language translation can be completed in real-time and reduce the time to complete manual, simultaneous language interpretation during immigration hearings. EOIR personnel and parties to proceedings can conveniently read real-time translations on courtroom computers and monitors. The solution needs to allow contracted interpreters to appear remotely via video, which provides cost savings compared to in-person interpretation services and may reduce instances of inadvertently double-booking interpreters or navigating the interpreter’s availability to travel to different hearing locations, all of which makes the immigration adjudication process more efficient. | Real-time text translation of languages into English during immigration hearings. Preliminary draft transcripts of immigration hearings for manual review to complete. | Real-time text translation of languages into English during immigration hearings. Preliminary draft transcripts of immigration hearings for manual review to complete. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0287 | Immigration Case Filing Intake and Processing | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Currently, all immigration case filings are manually reviewed for the requisite physical quality (legibility, compliance with formatting requirements, etc.) before the filing is officially accepted or rejected by EOIR personnel, which prolongs the initial intake and processing of case filings. A large portion of initial intake and processing of case filings could be automated with AI tools, only requiring manual review for outputs below a defined threshold. | More efficient review of case filings at intake. Ability to reallocate EOIR administrative personnel to assist with other tasks in the immigration adjudication process. | Automated review of case filings, automated acceptance or rejection of case filings, and recommendations to EOIR personnel to manually review case filings for quality as needed. | Automated review of case filings, automated acceptance or rejection of case filings, and recommendations to EOIR personnel to manually review case filings for quality as needed. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0288 | Immigration Case and Filing Content Summary | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | EOIR legal support staff and adjudicators review voluminous filings in EOIR immigration proceedings, sometimes ranging into hundreds of pages for a single filing, and parties to proceedings do not organize content clearly or at all, which hinders review and processing by EOIR personnel to adjudicate the case. EOIR legal staff organize, review and categorize documentation submitted, and many of these administrative functions could be automated. In addition, EOIR's legal education and training team spends hours reading and summarizing immigration case law to prepare agency trainings and informational materials. | Technological assistance with research so adjudicators and legal support staff may focus their time and attention on utilizing their decision-making skill sets on legal analysis and drawing legal conclusions in an efficient manner. Decrease time and labor required for processing filings, reviewing filings, categorizing cases, and locating relevant content in filings, which improves the efficiency of the immigration proceedings. Decrease time and labor for reading and preparing immigration law trainings and informational materials. | Summaries of court filing contents with references to the source of information within the summary. Tabbing, labeling, and identifying submission types within voluminous court filings. Information pointers to EOIR adjudicators and legal support staff regarding where specific content most relevant to the adjudicator’s inquiry is located within the record. Summaries of immigration case law and other relevant legal authorities. | Summaries of court filing contents with references to the source of information within the summary. Tabbing, labeling, and identifying submission types within voluminous court filings. Information pointers to EOIR adjudicators and legal support staff regarding where specific content most relevant to the adjudicator’s inquiry is located within the record. Summaries of immigration case law and other relevant legal authorities. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0289 | Image processing | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Manual time-intensive image processing | Process automation for faster FBI response | Output of the AI model will be a proposed list of digital image processing steps | Output of the AI model will be a proposed list of digital image processing steps | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0290 | Graph Analytics & Visualization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Identifies case links and creates visualizations of complex relationships | Assist with graphing relationships and other forms of data visualization. | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0291 | Grant Risk Assessment Model v3 | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Improved selection of grants to audit | Provide auditors with an additional resource in performing risk assessments that assists in the audit selection process. Allowing auditors to focus work on higher-risk grants can allow for the recovery or redirection of misused government funds and improve auditor effectiveness and efficiency. | Estimated questioned costs and findings for an audit | Estimated questioned costs and findings for an audit | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0292 | Goblin | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0293 | Geospatial Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DOJ personnel need a way to process, map, visualize, and analyze geographic data to protect the American public, further investigations, and facilitate information exchange with federal, state, local, and foreign partners. | This use case enables DOJ to apply advanced AI/ML capabilities to mission-enabling geographic data in order to enhance data mapping, visualization, and integration. | The system can produce a variety of outputs in standard industry formats (e.g., spreadsheet files, maps, analytic files, database tables, and dynamic applications). | a) Purchased from a vendor | ESRI, ArcGIS | Yes | The system can produce a variety of outputs in standard industry formats (e.g., spreadsheet files, maps, analytic files, database tables, and dynamic applications). | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0294 | Generating Recommendations, Outlines, and Summaries | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to analyze unstructured and structured data to support providing recommendations, outlines, data classification, summaries, and other business operation support. | To accelerate insights and tool development to advance DEA's business operations and better serve the public. | Outputs may include visualizations and tables that support efficiencies in administrative functions. | Outputs may include visualizations and tables that support efficiencies in administrative functions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0295 | FOIA Production Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Enhances and streamlines the processing of Freedom of Information Act (FOIA) requests. Automates tasks such as document classification, intelligent identification and redaction of sensitive or confidential information, and deduplication of documents. | The AI capabilities enable more accurate and efficient organization and retrieval of FOIA-related documents. Additionally, AI optimizes workflows by automating repetitive tasks and minimizing human error, leading to faster processing times and increased operational efficiency. Overall, the incorporation of AI into FOIAXpress is designed to improve the efficiency, accuracy, and compliance of FOIA request processing, enabling staff to respond more promptly, reduce operational costs, and maintain higher standards of transparency and accountability. | Classifications, recommendations, and predictions. | a) Purchased from a vendor | FOIA Xpress, Forum One, Adobe, and Polydelta | No | Classifications, recommendations, and predictions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0296 | Fingerprint (Friction Ridge) Optical Character Recognition (OCR) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA, when conducting fingerprint analysis to identify individuals who may be connected to evidence, needs to be able to compare friction ridge prints to other prints within the boundaries of a case. Product enables linking of cases where individuals are not necessarily identified. | This use case saves time and provides information for human decision-making. | Outputs images and portions of print cards. | c) Developed with both contracting and in-house resources | No | Outputs images and portions of print cards. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0297 | Extracting Data from Receipts to Speed Travel Reimbursement or Provide Logbook Documentation | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Gas receipts need to be reconciled and logged into FIRM – the AI would capture the data and can be used to record the receipt (reducing loss of paper) and eventually upload it into FIRM, reducing human data entry and time. IN also inspects receipts as part of their inspections. The laboratories also audit their OGV logbooks more than once annually. This would be a great pilot to expand into scanning and recording information from other purchases into UFMS and automating that process as well. | Cost savings, live data entry of OGV use (miles and fuel consumption), streamlining of voucher packet creation | The output is a CSV file. | a) Purchased from a vendor | Microsoft | No | The output is a CSV file. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0298 | EOIR Adjudicator Notice/Order Writing Assistance | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Generative AI | To reduce time for EOIR adjudicators to draft notices and orders for immigration cases. After reviewing the facts in the case and conducting a legal analysis of the issues presented, EOIR adjudicators and legal support staff determine their legal conclusions. Technology could be utilized to assist in preparation of a draft document for review by EOIR personnel based on the adjudicator's legal conclusions. | Improvement in the quality of writing (grammar, spelling, clarity, conciseness, etc.), as well as improvements to quality of decisions with more robust citations to the relevant facts in the case and the legal authority used to support the legal conclusions. Reduced time for drafting lengthy orders or notices. Reduced time to complete immigration cases. | Suggested templates for notices. Recommendations for case specific draft orders that include citations to the record and relevant legal authority, with embedded links to allow for efficient review and refinement. | Suggested templates for notices. Recommendations for case specific draft orders that include citations to the record and relevant legal authority, with embedded links to allow for efficient review and refinement. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0299 | Entity Resolution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Entity Extraction and Resolution | Faster FBI operations | Searchable index of all records associated with distinct individuals. | b) Developed in-house | Yes | Searchable index of all records associated with distinct individuals. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0300 | Entity Extraction and Summarization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Data triage | Faster FBI operations | Text | Text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0301 | Enabling eDiscovery Platform AI | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Migrate CORA data to Relativity One or Everlaw, cloud-based solutions, to enable Relativity's and Everlaw's AI tools for review prioritization, privilege review, and other advanced eDiscovery utilities. Utilize internal or third-party tools for AI-assisted collection and data processing, including image and voice recognition and analysis of complex data formats. Integrate eDiscovery tools into case management processes. This initiative transforms litigation support capabilities by leveraging AI to accelerate document review processes, reduce discovery costs, and improve case preparation efficiency and effectiveness. | (1) Increases data availability and accessibility for integration with AI platforms through hosting on a scalable platform, in alignment with DOJ Data Strategy Goal #1 "Enterprise Data Management." Creates replicable enterprise capabilities that other DOJ components can adopt. (2) Allows migration to our existing environments more quickly. The discovery process will be streamlined with increased prioritization of review, lessening the time for manual privilege reviews. (3) AI-enabled workflows available to Division litigating teams for privilege review, case strategy, and prioritized document review. | Prioritizations, Classifications, Recommendations: document priority rankings, privilege classifications, review recommendations. | Prioritizations, Classifications, Recommendations: document priority rankings, privilege classifications, review recommendations. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0302 | Email Organization Plugin | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | More efficient filing of email | Reduced time spent on administrative record retention requirements | No direct output; sorts and files documents in Outlook and electronic document repositories | No direct output; sorts and files documents in Outlook and electronic document repositories | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0303 | Document Processing | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Use natural language processing to convert spoken audio to text for employees with vision or mobility limitations. | Machine extraction and arrangement of data facilitates review and can allow human reviewers to locate relationships and patterns that would not otherwise be obvious. | Structured data files, including spreadsheets | Structured data files, including spreadsheets | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0304 | Diagram Creation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Make effective diagrams and illustrations quickly from text prompts for use in briefs and as demonstratives at hearings and trial | More effective advocacy and reduced time in generating effective illustrations | GenAI images and diagrams based on user's input of data | GenAI images and diagrams based on user's input of data | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0305 | Data Triage and Processing | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Transcription, translation, summarization, and object detection in audio and video | Faster FBI operations | Text | a) Purchased from a vendor | AI Service Provider | No | Text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0306 | Data Triage | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Manual search for data through many reports | Faster FBI operations | Text | Text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0307 | Data outlier detection | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Validation of data given to FBI by checking for outliers | Better data quality through targeted human review | Potential data outliers for human review | Potential data outliers for human review | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0308 | Data Call Code Assist Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Automating administrative tasks | Increased efficiency and cost savings, and improved team productivity. | Search query terms | c) Developed with both contracting and in-house resources | Yes | Search query terms | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0309 | Customer Service AI Agent/ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Members of the public visiting EOIR's website have trouble locating information on the website. | Improving access to information on the EOIR website. | Suggests the EOIR webpage where the customer can locate the relevant content or information. | Suggests the EOIR webpage where the customer can locate the relevant content or information. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0310 | Conduit AI | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Generative AI | Platform is meant to provide a generative AI that can be used for document review and other use cases as a comparison point to other commercially available alternatives. | Increased efficiency, reduced costs, and improved customer experience through its conversational AI platform. | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | a) Purchased from a vendor | Conductor | No | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0311 | CoHost AI (Podcast hosting service feature) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Completing an entire production, hosted and published. | Completing an entire production, hosted and published, in a fraction of the time it would take to complete manually. | Podcasts are for public-relations or educational use, and not used for LE purposes | a) Purchased from a vendor | Buzzsprout | No | Podcasts are for public-relations or educational use, and not used for LE purposes | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0312 | Code Development | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | More efficient writing and maintenance of code. | Improve developer productivity and streamline development lifecycle. | Coding guidance and suggestions, with the tool providing real-time code completions based on comments and existing code. | a) Purchased from a vendor | GitHub, Microsoft | No | Coding guidance and suggestions, with the tool providing real-time code completions based on comments and existing code. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0313 | Cocounsel AI | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Generative AI | Platform is meant to provide a generative AI that can be used for document review and other use cases as a comparison point to other commercially available alternatives. | Increases USAO efficiency through rapid analysis of legal documents, improves accuracy by reducing manual review errors, and assists offices that are not fully staffed by doing more routine tasks and allowing legal professionals to focus on strategic and high-value work | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | a) Purchased from a vendor | Thomson Reuters | No | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0314 | Claims Program Predictive Fraud Analytics | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Deploy advanced AI analytics to detect fraudulent claims and suspicious patterns in compensation claims programs, including the September 11th Victims Compensation Fund, Radiation Exposure Compensation Act, Camp Lejeune Justice Act, and other federal victim assistance programs. This initiative protects program integrity, ensures resources reach legitimate claimants, and maintains public trust in federal compensation systems. Aligns with Administration priorities on combating fraud, protecting taxpayer funds, and ensuring justice for claimants. | (1) Strengthens the Civil Division's capabilities in administering victim compensation programs by identifying potentially fraudulent medical claims, duplicate submissions, and identity fraud. (2) Builds upon existing Civil Division case management systems and medical claim review processes. Leverages ongoing fraud detection initiatives across DOJ components, integrates with established medical record verification systems, and utilizes existing partnerships with healthcare providers and medical review contractors. (3) Improvement in fraudulent claim detection rates, prevention of fraudulent payouts annually, reduction in false positive flags affecting legitimate claimants, faster claim processing times for verified submissions, and improved coordination metrics with investigative agencies on fraud referrals. | Predictions, Classifications, Scores: fraud probability scores, claim authenticity classifications, risk alerts. | Predictions, Classifications, Scores: fraud probability scores, claim authenticity classifications, risk alerts. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0315 | Civil Rights Public Reporting Portal | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | With limited staff, we use AI to summarize incoming public complaints to assist in determining if a complaint is actionable by our teams. | Lower backlog and faster response to the public. | Generates report summaries and tags on reports for CRT staff analysis. | Generates report summaries and tags on reports for CRT staff analysis. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0316 | Chatbot to Answer Internal Employee Policy Queries | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Natural Language Processing (NLP) | It is often challenging for DEA employees to manually search through our voluminous collection of manuals, books, CBP chemical codes, CFR, U.S.C., etc. to find an answer to their specific questions about policy, law, and rules. Training the AI on these materials enables it to answer employee queries comprehensively and quickly, thereby saving employees a lot of time. | AI can provide comprehensive answers to employees' questions much more quickly than if the employees had to search and find the answers themselves. In addition, the AI can identify content that needs to be revised or added to effectively provide answers. | The solution will enable users to ask questions through a chatbot interface, where the AI system, trained on the Agents Manual, will generate comprehensive answers and recommendations. These responses will be sourced from all relevant materials and include hyperlinks to the original references for easy access and verification. | c) Developed with both contracting and in-house resources | OpenAI | No | The solution will enable users to ask questions through a chatbot interface, where the AI system, trained on the Agents Manual, will generate comprehensive answers and recommendations. These responses will be sourced from all relevant materials and include hyperlinks to the original references for easy access and verification. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0317 | Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | General AI assistance | Faster FBI operations | Multimodal output | Multimodal output | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0318 | Case Management System Integration | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Leverage AI capabilities to support case management, such as creating codes for events in cases in order to track their progress and estimating time to completion to help managers evaluate the resource needs of the case. | Reduced administrative overhead and leaner management structure | GenAI text; other precise types of outputs not yet known | GenAI text; other precise types of outputs not yet known | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0319 | Camtasia | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This allows USMS training content creators to generate on-screen presenters and realistic narration from text for global accessibility. | Enhancing learners' online learning experience by improving accuracy of services, supporting 508 compliance, and reducing production time to release training materials to USMS employees. | Video generation and generated text from speech. | a) Purchased from a vendor | TechSmith | No | Video generation and generated text from speech. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0320 | Business Intelligence Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Assists with data discovery and visualization. | The embedded data analytic capabilities will increase the efficiency and effectiveness to locate and analyze data. | Tables, graphs, link-node diagrams, and other visualizations of extracted data. | a) Purchased from a vendor | Tableau, PowerBI | No | Tables, graphs, link-node diagrams, and other visualizations of extracted data. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0321 | Business Form Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Manual, time-intensive process | Faster FBI operations | Document | Document | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0322 | Bloomberg (AI Assisted Legal Research) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Provides AI-assisted research to help streamline legal research and document review. | Increases the speed at which attorneys can review and evaluate case law to determine if it is applicable to their current investigations. | Summarization of caselaw | Summarization of caselaw | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0323 | Big Data Visualization Tools | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Improving analytics and support for document review | Open-source tools that we believe could assist us with graphing relationships and other forms of data visualization. | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0324 | Background Searches | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | To perform background searches to inform an EOIR character and fitness determination for individuals applying to be an EOIR accredited representative. | Timely, comprehensive, and efficient background check. Character and fitness determinations made based on accurate background information. EOIR approves accreditation applicants with the requisite character and fitness. Individuals in EOIR immigration proceedings are assisted by accredited representatives with the requisite character and fitness. | Background check findings. | a) Purchased from a vendor | TransUnion Risk and Alternative Data Solutions, Inc. | Yes | Background check findings. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0325 | Azure AI Foundry Platform | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This AI solution is meant to provide chatbot responses in a secure enclave. In addition, this platform is meant to provide a generative AI that can be used for document review and other use cases as a comparison point to other commercially available alternatives. | Decreased cost to operate AI compared to other commercially available solutions. | Text outputs | c) Developed with both contracting and in-house resources | Microsoft | No | Text outputs | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0326 | AWS Textract | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | AWS Textract would be used to read the content from certain PDFs sent to USMS by partner agencies. It will prepopulate screens for the user to review against a mailed-in PDF. | Reduce processing time by not having to rekey information from text-based documents. | OCR/scanned data from PDFs | OCR/scanned data from PDFs | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0327 | AWS Rekognition | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Computer Vision | AWS Rekognition suite would be used to prevent the creation of duplicate records in the USMS Capture System. The application would index faces already in the Capture system then search new entries against the existing database to flag and help determine if a possible existing record exists for a new subject. | The expected benefits to the agency would be higher data quality, lowered risk of duplicate FID creation, and faster intakes. | Possible matches for intake data on existing records that appear as options for decision-making. | Possible matches for intake data on existing records that appear as options for decision-making. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0328 | Audio Clarity Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Audio clarity | Higher quality data | Audio | Audio | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0329 | Audio and Written Transcription and Translation | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning | This technology automates the transcription and translation of Spanish and Mandarin Chinese audio files from lawfully seized devices and authorized communications. | The immediate benefits are speed and lower cost, enabling investigators to quickly identify which parts of the conversations should be reviewed and interpreted by human translators. | Outputs transcription of the original language and the English translation with speaker differentiation, including search results for the predetermined relevant terms defined by the analyst. | c) Developed with both contracting and in-house resources | MIT Lincoln Laboratory | No | Outputs transcription of the original language and the English translation with speaker differentiation, including search results for the predetermined relevant terms defined by the analyst. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0330 | Audiate | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This allows USMS training content creators to utilize text-to-speech generation with audio from a wide range of voices and tones without the need for additional actors. | Enhancing learners' online learning experience by improving accuracy of services, supporting 508 compliance, and reducing production time to release training materials to USMS employees. | Audio files and generated speech from text. | a) Purchased from a vendor | TechSmith | No | Audio files and generated speech from text. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0331 | ATR Generative Artificial Intelligence Test | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Facilitate access to AI tools by ATR personnel. | This initiative will enable ATR to embrace AI technology responsibly consistent with Presidential Action and various departmental guidance, allowing ATR personnel to optimize and modernize their work processes. | Open-source research, summarizing publicly available documents. | a) Purchased from a vendor | https://www.harvey.ai/, https://openai.com/, https://www.perplexity.ai/ | No | Open-source research, summarizing publicly available documents. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0332 | ATR Expert/Consulting with Bates White | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The AI tool will ingest all the data sets and work to standardize them so that the Bates White expert and team can use the data to run economic models. | Assist in preliminary data processing, including document summarization, name standardization, and text extraction. This includes “cleaning” the data for additional processing, performing the equivalent of a “find and replace” function to standardize names. The Bates White system may use AI to summarize the general content of documents. | The output data may be provided to experts and support staff working for the State Attorneys General who are cooperating with ATR on investigations and litigation. The output data will be returned to ATR for use by internal ATR economists. | a) Purchased from a vendor | Bates White, using Microsoft Azure | No | The output data may be provided to experts and support staff working for the State Attorneys General who are cooperating with ATR on investigations and litigation. The output data will be returned to ATR for use by internal ATR economists. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0333 | AI-powered Legal Research | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | DOJ attorneys spend significant time manually researching case law and regulatory precedents across jurisdictions, often missing obscure authorities or evolving standards. This initiative deploys an AI-powered legal research platform that synthesizes case law, flags conflicts or shifts in standards, and provides confidence scores for relevance, automatically updating as new decisions are published. Aligns with E.O. 14179’s innovation directive and M-25-21’s efficiency requirements. | (1) Enhances DOJ's litigation effectiveness by ensuring comprehensive legal research, reducing research time per case, and improving argument quality through better precedent identification. Strengthens government's ability to defend federal programs and policies with more thorough legal foundations. (2) Builds on existing Westlaw/Lexis subscriptions, DOJ brief bank, and PACER databases. Integrates with current legal research workflows and citation management systems. (3) Reduction in research hours per brief; increase in relevant precedents cited; improved appellate success rates; measurable improvement in legal argument comprehensiveness. | Recommendations, Classifications, Scores: research recommendations, precedent relevance scores, conflict alerts. | Recommendations, Classifications, Scores: research recommendations, precedent relevance scores, conflict alerts. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0334 | AI-generated Content Detector | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Federal litigators face increasing challenges with AI-manipulated evidence and documents from opposing parties. An AI tool could be used to flag potentially AI-generated content. | (1) Enhanced litigation integrity by identifying potentially manipulated evidence, protecting court proceedings from AI-generated misinformation and ensuring compliance with local court rules. (2) Builds on existing document review platforms and federal privilege protection protocols. (3) Improved evidence verification accuracy; reduced risk of submitting hallucinated data; enhanced compliance with AI disclosure requirements; increased attorney confidence in document authenticity. | Classifications, Predictions: AI-generation probability scores, document authenticity flags, content manipulation alerts, disclosure requirement notifications. | Classifications, Predictions: AI-generation probability scores, document authenticity flags, content manipulation alerts, disclosure requirement notifications. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0335 | AI-Enabled Workflow Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Office of Immigration Litigation (OIL) faces significant backlogs, with repetitive or frivolous claims slowing progress and foreign-language evidence creating bottlenecks. This initiative uses Natural Language Processing for triage and automated translation to accelerate case processing. It aligns with M-25-21’s innovation and governance principles by improving efficiency while preserving human oversight. | (1) Reduces immigration case backlogs, ensures consistent treatment of claims, and improves responsiveness to federal courts. (2) Builds on OIL case management systems, United States Citizenship and Immigration Services data feeds, and DOJ translation contract costs. (3) Reduced backlog size; fewer attorney hours per case; translation accuracy benchmarks met; increased identification of fraudulent/frivolous and repetitive claims. | Classifications, Automated Translations, Recommendations: case priority classifications, language translations, processing recommendations. | Classifications, Automated Translations, Recommendations: case priority classifications, language translations, processing recommendations. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0336 | AI-enabled Legal Argument Harmonization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Conflicting arguments across DOJ branches can weaken credibility in appellate courts. This initiative deploys AI to mine arguments across briefs, harmonize DOJ positions, and validate citations. It advances M-25-21’s governance requirement for consistent positions and E.O. 14179’s push for efficiency. | (1) Enhances DOJ credibility before the court; avoids conflicting arguments. (2) Uses DOJ appellate brief bank and legal research/citation tools. (3) Reduction in conflicting arguments; % of briefs citation-validated; improved appellate outcomes. | Recommendations, Validations: argument alignment suggestions, citation verification, position harmonization recommendations. | Recommendations, Validations: argument alignment suggestions, citation verification, position harmonization recommendations. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0337 | AI-enabled Compliance Verification | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Federal agencies and recipients of federal funding must certify compliance with various civil rights statutes, but DOJ lacks efficient methods to verify the accuracy of these certifications. Manual review of compliance documentation is resource-intensive and often occurs only after complaints are filed, allowing violations to persist and potentially expand. False certifications can result in continued federal funding to non-compliant entities, undermining civil rights enforcement and wasting taxpayer resources. Aligns with E.O. 14179's innovation requirements and M-25-21's public trust and governance pillars. | (1) Strengthens DOJ's ability to enforce civil rights laws through the False Claims Act by identifying clear-cut compliance violations earlier in the process. Protects taxpayer funds from flowing to entities that falsely certify compliance while ensuring federal programs achieve their intended civil rights objectives. (2) Leverages FCA compliance databases, Civil Rights Division patterns, and initial whistleblower submission channels. (3) Focus on objective, verifiable metrics such as statistical disparities in outcomes, missing required documentation, or contradictions between certifications and published policies. Number of suspicious certifications flagged; investigations initiated; successful FCA settlements or recoveries. | Classifications, Predictions, Recommendations: compliance risk assessments, violation predictions, investigation recommendations. | Classifications, Predictions, Recommendations: compliance risk assessments, violation predictions, investigation recommendations. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0338 | AI-Enabled Briefing Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | Federal Programs Branch (FPB) attorneys face immense workloads defending statutes and federal programs, often requiring rapid analysis of massive records and consistent legal arguments across circuits. This initiative will deploy a retrieval-augmented generation (RAG) tool trained on DOJ filings and administrative records to accelerate drafting and ensure consistency. It directly aligns with E.O. 14179 (removing barriers to AI adoption) and OMB M-25-21 (innovation, governance, and public trust). | (1) This initiative strengthens the federal government’s ability to protect statutory authority and defend policy actions across all agencies. (2) Reduction in attorney hours per brief; mitigation of hallucinated citations; measurable improvement in argument consistency across cases. (3) Builds on DOJ’s existing “brief bank,” eDiscovery platforms, and PACER data archives. | Content Generation, Recommendations: draft legal briefs, argument suggestions, citation recommendations, consistency checks. | Content Generation, Recommendations: draft legal briefs, argument suggestions, citation recommendations, consistency checks. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0339 | AI-driven Fraud Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Fraud diverts billions in taxpayer funds, but fraud cases involve sifting through vast amounts of structured and unstructured data, often too large for manual review to detect early fraud signals. This initiative will apply AI-enabled anomaly detection to efficiently synthesize insights from vast data collections and uncover fraudulent patterns. It aligns with M-25-21’s public trust priority by safeguarding taxpayer funds and E.O. 14179’s innovation directive. | (1) Bolsters DOJ’s mission to prevent waste, fraud, and abuse in taxpayer-funded health programs, strengthening enforcement under the FCA. (2) Builds on existing Medicare/Medicaid data feeds, OIG case frameworks, previous FCA healthcare enforcement analytics, and ongoing interagency fraud task force initiatives. (3) Increase in early identification of false claims; recovery dollars secured; reduced investigation timelines. | Predictions, Classifications, Alerts: fraud risk scores, anomaly alerts, pattern classifications. | Predictions, Classifications, Alerts: fraud risk scores, anomaly alerts, pattern classifications. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0340 | AI-assisted Settlement Data and Risk Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Settlements require knowledge of both valuation and risk for frequent types of litigation. An AI tool could pull structured data from settlement databases together with unstructured settlement memoranda, and analyze settlement risk, valuation, and qualitative factors. Users can query one or both sources of data in natural language. | (1) Improved settlement decision-making through comprehensive risk and valuation analysis, leading to more favorable outcomes for the government and taxpayers. (2) Builds on existing Salesforce migration initiatives and settlement databases while adding natural language query capabilities. (3) Reduced attorney time per settlement analysis; improved consistency in settlement valuations; better risk assessment accuracy; enhanced ability to identify settlement patterns and trends. | Analytics, Predictions, Recommendations: settlement risk assessments, valuation analyses, pattern identification, natural language query responses from structured and unstructured data. | Analytics, Predictions, Recommendations: settlement risk assessments, valuation analyses, pattern identification, natural language query responses from structured and unstructured data. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0341 | AI-assisted Legacy Code Modernization | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This initiative uses AI-assisted code translation and refactoring tools to automatically convert legacy code into modern, secure languages (e.g., Java, Python) while flagging logic gaps and optimizing for cloud environments. Aligns with E.O. 14179’s innovation directive and M-25-21’s governance requirements. | (1) Modernized applications improve resilience, reduce security risk, and lower long-term IT O&M costs, directly supporting DOJ’s modernization and cybersecurity priorities. (2) Builds on DOJ CIO modernization roadmaps, Federal IT dashboards, and prior migration initiatives to cloud platforms. (3) Reduction in legacy system maintenance costs, outages and performance loss; # of applications successfully migrated; cybersecurity vulnerabilities reduced. | Code Generation, Recommendations, Predictions: modernized code output, optimization recommendations, vulnerability predictions. | Code Generation, Recommendations, Predictions: modernized code output, optimization recommendations, vulnerability predictions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0342 | AI Sandbox for exploration and education on AI | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Help identify enhancements in mission capabilities; explore risk management; help identify AI use cases within OJP; educate an AI-enabled workforce. | Enable the OJP workforce to work faster; reduce the number of software tools by identifying functions AI can perform; improve access to data. | Risk assessments, recommendations, decisions. | Risk assessments, recommendations, decisions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0343 | AI Powered Data Governance | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | This initiative deploys AI-driven data governance and metadata management to auto-tag, catalog, and enforce retention, while identifying duplicate/low-value files. Aligns with M-25-21 governance & E.O. 14179 innovation. | (1) Reduces costs, improves compliance with records management/FOIA, and data governance. Boosts transparency by making DOJ data discoverable and reusable. (2) DOJ records systems, NARA retention schedules, existing FOIA/eDiscovery platforms. (3) Measurable data storage savings; % of files tagged with metadata; improved FOIA response times. | Classifications, Recommendations, Automated Actions: content categorization, retention recommendations, duplicate identification. | Classifications, Recommendations, Automated Actions: content categorization, retention recommendations, duplicate identification. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0344 | AI Personal Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Complex litigation requires intensive time and project management. AI assistants can analyze calendars, emails, case-tracking sheets, and schedules. They can flag deadlines, note high-priority tasks, and suggest productivity techniques for managing complex projects. Attorneys could create additional notifications or project management integrations to improve efficiency. | (1) Improved attorney productivity and case management efficiency, enabling better service delivery to client agencies and more effective litigation outcomes. (2) Builds on existing calendar systems, email platforms, and case-tracking databases while maintaining attorney-client privilege protections. (3) Reduced missed deadlines; improved task prioritization; enhanced productivity metrics; better work-life balance for attorneys; increased case management efficiency. | Recommendations, Alerts: deadline notifications, task prioritization suggestions, productivity optimization recommendations, calendar conflict alerts, project milestone tracking. | Recommendations, Alerts: deadline notifications, task prioritization suggestions, productivity optimization recommendations, calendar conflict alerts, project milestone tracking. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0345 | AI Meeting Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | This initiative deploys AI-enabled meeting assistants that provide real-time transcription, generate concise summaries, identify action items, and tag outcomes to case files. Aligns with E.O. 14179’s innovation goals and M-25-21’s efficiency and transparency pillars. | (1) Increases productivity by ensuring institutional knowledge is captured, searchable, and integrated into case management systems, reducing duplication and oversight risks. (2) Integrates with Microsoft Teams, Outlook, OneNote, and other knowledge management systems. (3) Increase in % of meetings transcribed and summarized; attorney time saved; # of action items captured and completed. | Transcriptions, Summaries, Extractions: meeting transcripts, summary reports, action item lists, outcome tags. | Transcriptions, Summaries, Extractions: meeting transcripts, summary reports, action item lists, outcome tags. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0346 | Mass Claim Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Components with mass claims, such as RECA, must respond to tens of thousands of similar claims within statutorily defined timelines, or the U.S. forfeits its defenses. Tools to quickly process mass claims, including template letter generation, claim categorization, and automated data entry of standardized filings, could meet urgent needs. | (1) Ensures statutory compliance for mass claims processing, protecting the government's legal defenses while providing timely relief to eligible claimants. (2) Builds on existing RECA program infrastructure and mass claims databases while incorporating specialized medical data handling and privilege protections. (3) Achievement of statutory processing deadlines; reduced attorney hours per claim; improved consistency in claim categorization; automated template generation; enhanced quality control processes; increased claimant satisfaction through faster processing. | Content Generation, Classifications, Automation: template letters, claim category assignments, automated data entry, standardized filing generation, quality control flags. | Content Generation, Classifications, Automation: template letters, claim category assignments, automated data entry, standardized filing generation, quality control flags. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0347 | AI Evidence and Claim Consolidation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | This initiative applies AI to synthesize records, summarize expert reports and depositions, and identify duplicate claims. It also uses AI to identify inconsistencies between records and plaintiff claims, identify red flag legal issues, and create templates to respond to frequent or high-volume litigation. It also allows for analysis of settlement and damages databases to identify outlier trends. It supports M-25-21’s public trust pillar and E.O. 14179’s innovation agenda. | (1) Improves the government’s litigation posture in high-value torts, reduces exposure to excessive payouts, and ensures equitable and efficient claims processing. (2) Builds on DOJ medical record review systems and HHS/VA data integration, along with Relativity/CORA settlement, damages, and entitlement databases. (3) Faster evidence review; detection and elimination of duplicate claims; more effective and consistent settlements; increased dismissal or settlement of weak claims; attorney time saved. | Summaries, Classifications, Predictions: document summaries, duplicate detection, inconsistency flagging, settlement predictions. | Summaries, Classifications, Predictions: document summaries, duplicate detection, inconsistency flagging, settlement predictions. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0348 | AI Cloud Environments | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | FedRAMP-authorized environments are used to deploy tools that enable data-driven decision-making. | To support expedited data collaboration and analytics. | Outputs vary by use case. | Outputs vary by use case. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0349 | AI CLIN Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Determines the number of contract lines needed to meet contract requirements, outlines each line, and populates certain data for review within the Unified Financial Management System. | Reduces the time and effort of manual work to create contracts within the Unified Financial Management System. | Suggested contract lines, descriptions, and certain system fields for review/approval by users. | Suggested contract lines, descriptions, and certain system fields for review/approval by users. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0350 | AI Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Time-intensive review of policy content | Faster FBI operations | Text with citations | Text with citations | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0351 | Adobe Premiere | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Seamless editing of videos and speech transcripts. | The videos are for public-relations or educational purposes and are not used for LE purposes. | Provides a quality video for educational viewing. | a) Purchased from a vendor | Adobe | No | Provides a quality video for educational viewing. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0352 | Adobe Photoshop | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Generative and image enhancement features, selection and workflow improvements. | Better graphics and adjustments of photos for professional-quality results. | Professional-quality photos and graphics. Photos are for public-relations or educational purposes and are not used for LE purposes. | a) Purchased from a vendor | Adobe | No | Professional-quality photos and graphics. Photos are for public-relations or educational purposes and are not used for LE purposes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0353 | Acquisition Support Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The problem VAO Ally is intended to solve is the time and resource burden of navigating complex procurement regulations in an office that is already operating lean, even at near-full staffing. Ally quickly summarizes public-domain contracting regulations, simplifies technical language, and provides easy access to relevant rules and resources. This reduces the time staff spend searching for answers, minimizes the risk of misinterpretation, and allows contracting professionals to focus on the judgment-based, legally binding decisions that only they can make. | The expected benefits of VAO Ally are faster access to accurate procurement guidance, reduced administrative burden on contracting professionals, and greater consistency in interpreting acquisition regulations. For the agency, this means improved efficiency in procurement processes, fewer delays caused by staff shortages or vacancies, and better use of limited resources to focus on mission-critical decision making. For the public, the outcome is a procurement workforce that can respond more quickly and effectively to agency needs, ultimately supporting timely delivery of government services and safeguarding taxpayer dollars. | Plain-language summaries, references to regulations, and simplified guidance that users can apply as part of their own professional judgment. The tool may suggest possible resources or interpretations, but the final decision-making authority rests entirely with a warranted Contracting Officer. | a) Purchased from a vendor | Virtual Acquisition Office (VAO) Ally | No | Plain-language summaries, references to regulations, and simplified guidance that users can apply as part of their own professional judgment. The tool may suggest possible resources or interpretations, but the final decision-making authority rests entirely with a warranted Contracting Officer. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0354 | Simulate Regulatory Audits to Train Diversion Investigators and Improve Audit Protocols | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Enables simulations of DEA Diversion audits that can be used to train new Diversion Investigators while evaluating DEA audit protocols for inconsistencies and vulnerabilities to generate best-practice models. | Ultimately, this will improve the efficiency and quality of regulatory audits, which will increase the number of civil fines that we issue. | Unknown | Unknown | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0355 | Parse and Capture Data Submitted by Laboratories to the NFLIS Program | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Agentic AI | Collecting and entering information from registrants in the National Forensic Laboratory Information System (NFLIS) program takes considerable time and effort and is prone to human error. | Cost savings, reduced customer wait times, and improved accuracy of reporting. | Extraction of information into a matrix of rows and columns which links identified substances to individual drug exhibits. | Extraction of information into a matrix of rows and columns which links identified substances to individual drug exhibits. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0356 | Managing Document Digital Signatures | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | It is currently time-consuming and challenging to collect digital signatures from DOJ personnel into one source document. AI would be used to manage the document workflow through a single upgraded storage location with advanced security features. | Saves time and effort by integrating identity verification while reducing the amount of storage required for documents. | DEA/DOJ digital cloud-based signatures can be integrated with an existing AI platform/application. AI outputs: AI-assisted review, automated tagging, custom extractions, agreement summaries, and a chatbot for user help. | DEA/DOJ digital cloud-based signatures can be integrated with an existing AI platform/application. AI outputs: AI-assisted review, automated tagging, custom extractions, agreement summaries, and a chatbot for user help. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0357 | Creating and Maintaining IT Security Packages for Authorizations | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Cybersecurity personnel want to automate the identification of security controls that can have implementation statements, create and maintain security packages for authorizations, keep up with compliance requirements, and more quickly onboard new systems and applications. | Reduces costs while enabling us to maintain compliance and consistency. | Generated implementation statements for security packages, reporting, trending, dashboarding | Generated implementation statements for security packages, reporting, trending, dashboarding | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0358 | Community Outreach Chatbot that Helps the Public Consume Prevention Resources | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Agentic AI | Many citizens may not have the time to explore the copious library of DEA Community Outreach and Prevention Support (CPO) resources. By training AI on these resources, citizens can ask plain-language questions and get succinct answers. | Increased usage of drug use prevention resources by citizens. Extends the reach of CPO into communities at high risk of drug abuse. | 1. Delivery of DEA publications (digital, hard copies for individuals, and bulk copies for organizations / events); 2. Connect users with best content fit; 3. Allow users to request DEA participation in community events; 4. Increase user skills and understanding to prevent substance use through the generation of answers derived from DEA content; 5. Connect users with DEA partners that provide content outside of CPO scope. | 1. Delivery of DEA publications (digital, hard copies for individuals, and bulk copies for organizations / events); 2. Connect users with best content fit; 3. Allow users to request DEA participation in community events; 4. Increase user skills and understanding to prevent substance use through the generation of answers derived from DEA content; 5. Connect users with DEA partners that provide content outside of CPO scope. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0359 | Chatbot to Answer Diversion Registrants' Queries About Registration and Compliance Issues | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | TOMSS (Technical Operations Management Support System) will provide call deflection to ease the customer interaction burden on call center representatives, registration program specialists, and Diversion Investigators. The AI will provide consistent, accurate responses to repetitive queries and reduce manual workloads. | 30–40% reduction in repetitive calls and emails. • 15–25% reduction in agent workload. • 100% of chatbot responses sourced from verified DEA policy. • 24/7 access to authoritative self-service guidance for registrants. | • Policy-grounded responses to registrant inquiries (text output). • Deflection metrics and analytics (call volume reduction, FAQ trends). • Context-based prompts or redirects to DEA.gov resources. • Guardrail logic to return “no response” for non-registrant or ungrounded prompts. | • Policy-grounded responses to registrant inquiries (text output). • Deflection metrics and analytics (call volume reduction, FAQ trends). • Context-based prompts or redirects to DEA.gov resources. • Guardrail logic to return “no response” for non-registrant or ungrounded prompts. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0360 | Capture and Convert Structured Data from Scanned Case Documents to Support Advanced Analysis and Trend Forecasting | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | The Targeting & Special Projects Unit (DOIT) has identified a critical need to capture, convert, and process both structured and unstructured text from scanned case documents to support advanced analysis and trend forecasting. This initiative will leverage an optical character recognition (OCR) solution capable of extracting both typed and handwritten content from diverse sources, including medical notes, invoices, and statements. Captured text will be transformed into a standardized, machine-readable format (e.g., CSV) and integrated into a relational database. From there, advanced analytical techniques will be applied to reveal hidden structures, patterns, and relationships within the data. By unlocking this information, we aim to enhance our ability to anticipate trends, strengthen investigative strategies, and move toward a more predictive, data-driven approach. | Save valuable investigative time that can be used to focus on the results of the analysis. Conducting more comprehensive analysis on the data will improve trend forecasting, strengthen investigative strategies, and support a more predictive, data-driven approach. | The anticipated output for this AI use case will include text for documentation and CSV for Excel spreadsheet analytical work. | The anticipated output for this AI use case will include text for documentation and CSV for Excel spreadsheet analytical work. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0361 | Auditing Diversion Registrant Inventories of Controlled Substances | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Diversion Investigators regularly conduct accountability audits of Registrants' management of controlled substances in order to detect diversion. Registrants sometimes submit hundreds of pages of invoices, receipts, and logs, which must be reconciled against inventory and movement records to ensure that the audits balance. Diversion Investigators review these and flag them where there is non-compliance with regulations, such as missing information or missing signatures. This current workflow relies on manual data entry into spreadsheets, leading to errors, inconsistencies, and excessive investigative time. | Reduces the amount of investigator time per audit by automating the extraction of key fields (e.g., item, quantity, cost, dates) from invoices and auto-populating computation charts. Standardized extraction and reconciliation eliminates human entry errors and decreases audit reconciliation error rates. Real-time flagging of discrepancies (e.g., inventory errors, excess sales, mismatched destruction records, possible fraudulent records) hastens the detection of potential diversion. Frees investigators to focus on high-value investigative activities and field operations, saving thousands in costs. Provides an auditable trail of extraction and reconciliation for internal and external audits. Maintains full audit-ready compliance for all registrants and a tangible audit trail for legal proceedings. | Outputs may be data reports to support Diversion Control Division's mission and better serve the registrant community. | Outputs may be data reports to support Diversion Control Division's mission and better serve the registrant community. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0362 | Analyzing Prescription Monitoring Program Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB M-25-21. | Classical/Predictive Machine Learning | DEA investigators currently rely on manual manipulation of data in spreadsheets to conduct Prescription Monitoring Program (PMP) analysis. This requires 8-10 hours of labor (data extraction, cleaning, cross-referencing, narrative synthesis), and, since investigators analyze data in different ways, results vary. There is little consistency across the agency or even between investigators in terms of what data is examined, and there is no easy way to identify patterns such as MMEs, combinations, and early fills. | Increases operational efficiency by automating data preparation, risk identification, and initial narrative generation. Average time spent to produce a PMP decreased from ten to two hours (an 80% reduction). Employing consistent, data-driven risk analysis increases the proportion of high-value investigations that result in successful enforcement or administrative outcomes. Reallocates Diversion Investigator effort to higher-value tasks (e.g., field operations, strategic planning), producing a labor cost benefit in the thousands per Diversion Investigator based on average pay and number of PMPs analyzed. Faster identification of prescribing anomalies supports earlier public health interventions and reduces community exposure to diverted pharmaceuticals. A single, centrally managed AI service can be provisioned to all DEA field offices, ensuring uniform analytic standards. | Quantitative data reports and analyses | Quantitative data reports and analyses | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0363 | Analyze Financial Reports to Identify Linkages Across Investigations | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB M-25-21. | Natural Language Processing (NLP) | Comprehensive analytical review of reports related to money laundering, specifically identifying commonalities across multiple investigations. | Analysts no longer have to spend time manually collecting data from reports and importing it into PowerBI dashboards as a first step in linking investigations and showing a larger picture of criminal networks. | Quantitative data reports and analyses to inform agents and intel analysts. | Quantitative data reports and analyses to inform agents and intel analysts. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0357 | Creating and Maintaining IT Security Packages for Authorizations | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB M-25-21. | Generative AI | Cybersecurity personnel want to automate the identification of security controls that can have implementation statements, create and maintain security packages for authorizations, keep up with compliance requirements, and more quickly onboard new systems and applications. | Reduces costs while enabling us to maintain compliance and consistency. | Generated implementation statements for security packages, reporting, trending, dashboarding | Generated implementation statements for security packages, reporting, trending, dashboarding | ||||||||||||||||||||
| Department Of Labor | | DOL-02 | Language Translation | Pre-deployment | | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-03 | Audio Transcription | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-04 | Text to Speech Conversion | Pre-deployment | | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-11 | Electronic Records Management | Pre-deployment | | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-12 | Call Recording Analysis | Pre-deployment | | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-13 | Automatic Document Processing | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-15 | Generative AI Assistant (AI Center) | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-19 | Occupational Employment and Wage Statistics (OEWS) Occupation Autocoder | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-20 | Scanner Data Product Classification | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-21 | Expenditure Classification Autocoder | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-22 | PII Redaction | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-23 | Workforce Recruitment Program Website Chatbot Assistant | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-27 | Worker Paid Leave Usage Simulation (Worker PLUS) Microsimulation Program | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-28 | Computer-Assisted Coding: Survey of Occupational Injuries and Illnesses (SOII) Autocoder | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-29 | Census of Fatal Occupational Injuries (CFOI) Record Matching | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-32 | Note Taking Bot | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-35 | Current Population Survey Off-the-Clock (CPS OTC) Prediction | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-36 | Sample Refinement: Frame API | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-37 | Consumer Expenditure (CE) Interview Item Code Estimation | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-38 | Consumer Expenditure (CE) Interview Imputations | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-39 | Quarterly Census of Employment and Wages (QCEW) North American Industry Classification System (NAICS) Autocoder | Pilot | | Pilot | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-40 | Comment Actionability Likelihood Score | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-41 | Computer-Assisted Review: Occupational Requirements Survey (ORS) Autocoder | Pre-deployment | | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-42 | Producer Price Index (PPI) Price Tolerance Prediction | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-43 | Employee Benefits Security Administration (EBSA) Case File Summarization | Pre-deployment | | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-45 | Natural Language Processing (NLP) Tool for Bureau of International Labor Affairs (ILAB) | Pre-deployment | | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-46 | DAISI (DOL AI Search Insights) | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | | DOL-47 | Employment and Training Administration (ETA) Grants Monitoring Tool through Doc Explorer | Deployed | | Deployed | ||||||||||||||||||||||||||||||
| Department Of State | A/PRI | AI Input in Translation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | | Generative AI | Leveraging machine-based tools to streamline workflows in translation work. Machine output is carefully post-edited by professional human translators to ensure the highest-quality product. | Reduced time, reduced cost, and improved accuracy of translated documents. | Translated text in draft form. | a) Purchased from a vendor | RWS | Yes | Translated text in draft form. | Memory modules | No | k) None of the above | No | |||||||||||||||
| Department Of State | A/SKS | FOIA Web ML Document Indexer | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | A/SKS | AI-Augmented Declassification Review | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | | Classical/Predictive Machine Learning | The number of documents, particularly cables and emails, that require declassification review will increase exponentially in the next few years. Manual review is unsustainable and expensive given the number of cables (in the hundreds of thousands) and emails (increasing from hundreds of thousands to millions). | The expected benefits and positive outcomes from using AI are cost savings by reducing the need for manual review, reducing labor in cable review by up to 80%, reducing the time needed for annual review, and creating more consistency in the review process. | The AI system's outputs are binary classification predictions for documents on whether a document should be declassified or exempt from declassification and multiclassification for reasons for exemption. | 02/02/2023 | c) Developed with both contracting and in-house resources | Deloitte | No | The AI system's outputs are binary classification predictions for documents on whether a document should be declassified or exempt from declassification and multiclassification for reasons for exemption. | The data used to train the model are cables from 1995-1999 that have completed manual review with metadata on decisions from manual declassification review. Additional data includes classification/declassification guides and associated glossaries to improve model performance. Performance evaluation is measured by a human Quality Control reviewer. | No | k) None of the above | Yes | a) Yes | The model could incorrectly predict to declassify a document. The model could predict to exempt a document that should have been declassified, reducing public visibility. All exempted documents are reviewed by a human. | d) In-progress | a) Yes, sufficient monitoring protocols have been established | a) Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | a) Direct usability testing | ||||||
| Department Of State | BP | BudgetChat AI Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | | Generative AI | Inputting large amounts of data from paper forms into a digital system using AI. | Time and cost savings, improved ability to identify crucial supplemental information available in other budget documents, improved accuracy in identifying key budget information in individual documents. | BudgetChat responds to prompts regarding the amount of spend and positions in prior years by combing through narratives presented to the Bureau of Budget and Planning (BP) from other Department bureaus, federal agencies, and Congress. | 09/12/2024 | b) Developed in-house | Yes | BudgetChat responds to prompts regarding the amount of spend and positions in prior years by combing through narratives presented to the Bureau of Budget and Planning (BP) from other Department bureaus, federal agencies, and Congress. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of State | CA | Evaluating Customer Feedback and Sentiments with AI | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | | Generative AI | To leverage Natural Language Processing (NLP) and secure Large Language Models (LLMs) on unstructured text data to identify actionable insights to drive customer improvement initiatives. | Greater insights about user experiences with consular services and the impact of service changes. | Multiple outputs include categorization, summarization, and analysis of customer feedback. | 07/01/2024 | b) Developed in-house | Yes | Multiple outputs include categorization, summarization, and analysis of customer feedback. | Open-source customer feedback data and other data, including data collected through customer surveys and by in-house researchers. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | CA | Consular Affairs FaceVACS | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | | Classical/Predictive Machine Learning | To automatically check passport photo quality during the Online Passport Renewal (OPR) process, providing instant feedback to ensure submitted images meet requirements. | Instant feedback to ensure submitted images meet requirements. | The output is a decision on whether to accept or reject the applicant's digitally submitted biometric face image. Applicants are prompted to retake and upload new photos if needed to meet requirements or submit a physical photo through standard processes if an acceptable digital photo cannot be obtained. | The output is a decision on whether to accept or reject the applicant's digitally submitted biometric face image. Applicants are prompted to retake and upload new photos if needed to meet requirements or submit a physical photo through standard processes if an acceptable digital photo cannot be obtained. | ||||||||||||||||||||||
| Department Of State | CA | Travel.State.Gov (TSG) Enhanced Search and Chatbot | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | Travel.state.gov (TSG) Content Refinement with AI Text Editor | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | Predictive Analytics Platform | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | Innovation and Transformation Measurement and Prediction | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | CodeGen - AI-assisted IT Application Development | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CGFS | Within Grade Increase Data Extraction Automation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | | Natural Language Processing (NLP) | Moving large amounts of data from paper and/or digital forms into a digital system can be challenging, time-intensive, and costly. | Lower processing time and resources for cost savings. | Tabulated dataset of extracted values referred for human review. | c) Developed with both contracting and in-house resources | GCP | No | Tabulated dataset of extracted values referred for human review. | Mocked-up forms | No | j) Employment Status, i) Income | Yes | |||||||||||||||
| Department Of State | CGFS | DS-5528 Promissory Note Automation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | | Natural Language Processing (NLP) | Moving large amounts of data from paper and/or digital forms into a digital system is challenging, time-intensive, and costly. | A reduction in processing time and needed resources to lower costs to complete tasks. | Dataset of extracted information referred for human review. | 12/08/2023 | c) Developed with both contracting and in-house resources | No | Dataset of extracted information referred for human review. | Mocked-up Promissory Notes | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of State | CGFS | StateInsight | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | | Natural Language Processing (NLP) | The need to summarize documents to produce a description of procurement awards and to allow authorized users to ask questions of the documents. | Time and cost savings through efficiency, improved contract management for better outcomes, and more continuity of contractor services. | Summary of documents to produce a description of procurement awards and responses to questions asked about the documents. | Summary of documents to produce a description of procurement awards and responses to questions asked about the documents. | ||||||||||||||||||||||
| Department Of State | CSO | Violence Against Civilians Model | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DS | User and Entity Behavior Analytics (UEBA) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | Data.State Analytics and AI Funhouse | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | | Other | Data scientists require a cybersecure platform for data science experiments, learning, and workflow deployment. Funhouse provides secure and documented access to GenAI and ML tools in an authorized and auditable platform. | Provide cybersecure analytics and AI experimentation to support mission delivery and process efficiency, increased mission efficiency, accelerated AI innovation, and cost savings. | Funhouse is a platform that enables use of general-purpose AI and ML tools, including open source, OpenAI, and Azure AI models, allowing individual users to tackle a variety of business problems across the Department's mission. | 01/10/2025 | c) Developed with both contracting and in-house resources | Microsoft, Databricks, ZenPoint | Yes | Funhouse is a platform that enables use of general-purpose AI and ML tools, including open source, OpenAI, and Azure AI models, allowing individual users to tackle a variety of business problems across the Department's mission. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | DT | FOIA 360 AI Matching Tool | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | Property and Procurement Analytics | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | | Classical/Predictive Machine Learning | The need to expand analytics within the Integrated Logistics Management System (ILMS) through development of a machine learning model for detecting patterns of potential anomalous activity in property and procurement. | The identification and reduction of anomalous procurement activity in overseas posts. | Detection of anomalous activities within the Integrated Logistics Management System (ILMS). | Detection of anomalous activities within the Integrated Logistics Management System (ILMS). | ||||||||||||||||||||||
| Department Of State | DT | AI Accelerator | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | | Generative AI | The need to quickly discover answers to common questions about using AI at the Department, locate tools and services, and get help with new AI needs - including guidance and requirements, available tools and solutions, and related processes. | Enhanced user experience to accelerate AI use at the Department, operational efficiencies to free up time for more complex tasks, and scalability to quickly incorporate new knowledge and related tasks. | Textual responses, links to resources, and links to forms to submit information for follow-on action. | Textual responses, links to resources, and links to forms to submit information for follow-on action. | ||||||||||||||||||||||
| Department Of State | DT | AI Research Engine (AIRE) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | | Generative AI | The need to improve the quality of reporting from the continuously growing sources of information through the categorization, summarization, and translation of information. Includes formerly titled use case "J Reports Data Collection & Management Tool (DCT)." | AI enables the quick categorization, summarization, and translation of data to make it easily accessible for quicker drafting of higher quality reports, resulting in staff time savings, reduced redundant workload, and better information to support the mission. Significant reduction in time and costs required to create higher quality reports. | Summarized information, translated documents, and sorted data. | c) Developed with both contracting and in-house resources | Deloitte | Yes | Summarized information, translated documents, and sorted data. | No | k) None of the above | No | ||||||||||||||||
| Department Of State | DT | PFCS Proving Ground | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | | Other | The need for an experimentation platform to promote the development and adoption of AI solutions. | The ability for personnel and teams to experiment with developing innovative AI solutions to address mission delivery or back office processes. Efficiency and effectiveness. | Proofs of concept that can tackle a variety of business problems across the mission, and can be scaled to enterprise solutions. | 05/01/2025 | c) Developed with both contracting and in-house resources | Palantir | Yes | Proofs of concept that can tackle a variety of business problems across the mission, and can be scaled to enterprise solutions. | No | k) None of the above | No | |||||||||||||||
| Department Of State | DT | DT Data Analytics and Assessment (DAA) AI Use Case ITCP Data Harvest | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | POA&M Orchestration | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | StateChat | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | | Generative AI | The need to safely and securely access, share, summarize, and research Sensitive But Unclassified (SBU) Department information. Improve and reduce the time needed to translate documents, summarize reports, and draft emails. StateChat is the Department's enterprise Generative AI-powered chatbot. | Delivers greater operational efficiency and makes personnel better collaborators and competitors on the front lines of diplomacy, delivering decision advantages, negotiation preparation, and preparedness through simulation. Reduced costs and more consistent documents referencing SBU information. | StateChat is a chatbot interface enriched with tools that generate formatted paper products, offer rapid and transparent searches of internal documents, and allow for research, synthesis, and drafting of documents with reference to cables and other internal information, policies, and processes. | 03/04/2024 | a) Purchased from a vendor | Palantir, OpenAI | Yes | StateChat is a chatbot interface enriched with tools that generate formatted paper products, offer rapid and transparent searches of internal documents, and allow for research, synthesis, and drafting of documents with reference to cables and other internal information, policies, and processes. | N/A - There is no training or fine-tuning of the foundational model, but there is ongoing evaluation of the performance of the foundational model. | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of State | DT | J Reports Data Collection & Management Tool (DCT) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | F/FAO | Natural Language Processing (NLP) for Foreign Assistance Appropriations Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | | Natural Language Processing (NLP) | Summarizing the key points of a lengthy report using AI. | The NLP application reduces the time needed to extract congressional directives from the annual appropriations bill, which ultimately shortens the cycle time for generating the report detailing the annual allocation of U.S. foreign assistance funds to foreign countries and international organizations (i.e., the 653(a) report). | Consolidated congressional directives from the annual appropriations bill to be included in the report detailing the allocation of U.S. foreign assistance funds to foreign countries and international organizations (i.e., the 653(a) report). | 08/05/2021 | c) Developed with both contracting and in-house resources | Guidehouse | Yes | Consolidated congressional directives from the annual appropriations bill to be included in the report detailing the allocation of U.S. foreign assistance funds to foreign countries and international organizations (i.e., the 653(a) report). | Annual appropriations bills | No | k) None of the above | Yes | ||||||||||||||
| Department Of State | F/FAO | ForeignAssistance.gov Processing for Mismatched Data | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | F/FAO | FA.gov PII Picker | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | | Natural Language Processing (NLP) | The need to streamline a process, improve accuracy, and provide validation checks. The previous process to identify Personally Identifiable Information (PII) required additional layers of manual verification. | Additional support to ensure data and privacy protections in public facing databases with less manual time and effort. | The identification of potential PII contained in data submissions. | c) Developed with both contracting and in-house resources | Guidehouse | No | The identification of potential PII contained in data submissions. | The PII Picker fine-tuned the spaCy NER model using a custom dataset of PII. This custom dataset was created using the PII that DOS identified while reviewing financial data prior to publication on ForeignAssistance.gov. | Yes | l) Other | Yes | |||||||||||||||
| Department Of State | F/FAO | Integrated Country Strategy (ICS) Turbo | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | | Natural Language Processing (NLP) | The need to conduct thematic analysis on large volumes of Integrated Country Strategy (ICS) data in a comprehensive and time-efficient manner. | Increased capacity of personnel to identify lessons learned and best practices by leveraging historical documentation. This is intended to be accomplished by decreasing the time and effort spent reading, synthesizing, and categorizing the contents of historical Integrated Country Strategies (ICS). | Thematic categories that group ICS sub-objectives written over periods of time across all countries. | 06/11/2025 | c) Developed with both contracting and in-house resources | Guidehouse | No | Thematic categories that group ICS sub-objectives written over periods of time across all countries. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | F/FAO | FA.gov RedactAid | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | | Classical/Predictive Machine Learning | The need to improve data quality of information and prevent sensitive data from being published to ForeignAssistance.gov. | Reduced effort and improved consistency and accuracy in protecting sensitive information from getting published in public-facing databases. | Sensitive information is identified and flagged prior to ForeignAssistance.gov publication. | c) Developed with both contracting and in-house resources | Guidehouse | No | Sensitive information is identified and flagged prior to ForeignAssistance.gov publication. | The RedactAid model was trained using historical unredacted ForeignAssistance.gov datasets. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | NFATC | Automatic Detection of Authentic Material | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | | Natural Language Processing (NLP) | The need to detect/identify authentic materials in target languages, reducing the time to develop language curricula and tests. | Reduced staff hours and improved variety of materials in foreign languages. | Authentic text, audio, and video in 8 foreign languages. | 11/10/2023 | b) Developed in-house | No | Authentic text, audio, and video in 8 foreign languages. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of State | NFATC | Office of the Historian (OH) Historical Analysis for Negotiations | b) Pilot The use case has been deployed in a limited test or pilot capacity. | International Affairs | Pilot | c) Not high-impact | Not high-impact | Generative AI | The need for historical information in real time to assist with analysis by the Office of the Historian (OH) and decision making. | Negotiators save time and achieve information advantage related to historical country relationships. | Succinct 1-page overviews of research for US negotiators and their assistants. | 07/01/2025 | b) Developed in-house | No | Succinct 1-page overviews of research for US negotiators and their assistants. | References the Foreign Relations of the United States series | No | k) None of the above | No | |||||||||||||||
| Department Of State | NFATC | FSI Enterprise Operations - Gaming and Simulations | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | NFATC | Enhancing Training Effectiveness in FSILearn Using AI | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | NFATC | FSI Continuous Learning Solutions | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | GPA | Digital Media Analytics Platform | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | Generative AI | Using open-source neural machine translation models to translate global media articles and Department and public foreign social media posts into English. | The use case reduces labor in producing media summary reports. | Summaries of large volumes of foreign language news and online posts to help teams identify and understand trends in a more efficient manner. | 03/01/2024 | b) Developed in-house | Yes | Summaries of large volumes of foreign language news and online posts to help teams identify and understand trends in a more efficient manner. | FLORES 200+ is used for evaluating translation models | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | DT | Electronic Health Record AI Enhancements | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Generative AI | Provide efficient, accurate, and comprehensive medical electronic health record (EHR) management, patient care, and administrative workflows by leveraging AI-powered tools, including large language models (LLMs) and natural language processing (NLP). | Improved operational efficiency, reduction of errors, and greater focus on delivering quality patient care. Improved patient data and safety with cost savings and operational efficiencies, transparency, and security. Enhanced clinical decision-making by summarizing patient information, identifying discrepancies, and generating referrals based on historical data and current inputs. | Automated data extraction, validation, sentiment analysis, categorization, identification of document types, data discrepancies, and text summarization into structured medical chart components. | 03/03/2025 | c) Developed with both contracting and in-house resources | Palantir | Yes | Automated data extraction, validation, sentiment analysis, categorization, identification of document types, data discrepancies, and text summarization into structured medical chart components. | MED-defined policy and definitions | Yes | https://www.state.gov/wp-content/uploads/2024/08/MED-PLTR-MED-PIA-for-Public-Facing-Site.pdf | b) Sex c) Age | Yes | a) Yes | https://www.state.gov/wp-content/uploads/2024/08/MED-PLTR-MED-PIA-for-Public-Facing-Site.pdf | Pending AI Impact Assessment | d) In-progress | a) Yes, sufficient monitoring protocols have been established | a) Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | a) Direct usability testing | ||||
| Department Of State | MGT | Rosie Chat Bot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Human Resources | Pilot | c) Not high-impact | Not high-impact | Generative AI | The need for employees to locate consistent, accurate answers to HR-related questions quickly based on internal information. | Cost savings, reduced wait times, improved access to information, enhanced efficiency, scalability, and consistency in communication. | Contextual answers and emails. | 09/02/2025 | b) Developed in-house | Yes | Contextual answers and emails. | No | k) None of the above | No | ||||||||||||||||
| Department Of State | MGT | Utility Invoices Data Extraction | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The need to reconcile and process payments faster. | Reduced time and labor costs through a more efficient and accurate process that automatically identifies key information to issue payments. | Key details from utility invoices, such as the account name, amount due, and consumption for processing payments and for preparing the monthly consumption reports. | 10/04/2023 | b) Developed in-house | No | Key details from utility invoices, such as the account name, amount due, and consumption for processing payments and for preparing the monthly consumption reports. | Historical bills. | No | k) None of the above | No | |||||||||||||||
| Department Of State | MGT | Database for Arrivals | a) Pre-deployment The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Automating the extraction of personnel information to create a centralized database for arrivals and departures. | Enhances collaboration and coordination among management sections, streamlining operations. | Centralized list or excel sheet database of personnel information. | Centralized list or excel sheet database of personnel information. | ||||||||||||||||||||||
| Department Of State | MGT | AI SharePoint Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The need for efficient access to post-specific information. | Improves productivity, engagement, and access to post-specific information. | Answers to post-specific questions via a chatbot integrated into the Embassy SharePoint site. | Answers to post-specific questions via a chatbot integrated into the Embassy SharePoint site. | ||||||||||||||||||||||
| Department Of State | MGT | Databricks Code Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Determining potential billing problems for Department mobile plans can be time consuming and prone to errors. | Improve efficiency, save costs, and save time conducting analyses of mobile plans, bills, and usage. | A data usage report. | A data usage report. | ||||||||||||||||||||||
| Department Of State | PM | Natural Language Processing (NLP) to pull key information from unstructured texts | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | WHA | Walter: Generative AI Support Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The need to more effectively focus on critical customer requests with existing resources. | Cost savings and reduced customer wait times by addressing Tier 0 and Tier 1 requests without human intervention. | Responses to Tier 0 and Tier 1 requests via a virtual customer service agent based on current management, policies, directives, and other internal resources. | 03/06/2025 | a) Purchased from a vendor | Microsoft | Yes | Responses to Tier 0 and Tier 1 requests via a virtual customer service agent based on current management, policies, directives, and other internal resources. | Yes | k) None of the above | No | |||||||||||||||
| Department Of State | ECA | ECA Program Management and Outreach - Summarization | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | R/GEC | Storyzy | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | GPA | AI Tools to Enhance PD Workflows | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | CSO | Mass Mobilization Model | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | CSO | Senturion Alpha | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | WHA | WHA/EX Information Management | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Knowledge Management for administrative and IT related processes. Formerly known as "Low Earth Orbit (LEO) Budget Office Inquiries" to answer questions frequently asked about budget. | Cost savings and reduced customer wait times. | Generative AI or logic-based responses with answers to administrative/IT related FAQs. | 03/03/2025 | c) Developed with both contracting and in-house resources | Microsoft | Yes | Generative AI or logic-based responses with answers to administrative/IT related FAQs. | WHA/EX SharePoint Data | Yes | k) None of the above | No | ||||||||||||||
| Department Of State | NFATC | Creating Persistent Virtual Reality Personas for Dynamic Training | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | There is a need to improve effectiveness of immersive training in the future. The team develops a dynamic integration between multiple AI tools to create stored/persistent "personas" that can be interacted with during training-related virtual reality (VR) exercises. | Natural conversation-based simulations and more effective training scenarios. | Persistent Personas; Speech-to-text API; Language Translation | Persistent Personas; Speech-to-text API; Language Translation | ||||||||||||||||||||||
| Department Of State | CA | Translation of Consular Content using AI | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of State | DT | TIP Report Research Translation | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | The TIP Report Translation AI system provides informal, unofficial translations for materials related to the annual Trafficking in Persons report. | This capability saves staff time and allows researchers and drafters to focus on critical report needs, such as identifying and resolving data gaps. | Informal, unofficial translations of PDFs and Microsoft Word documents. Each document includes a watermark denoting that the AI translation is unofficial and must be reviewed by a human. | 11/01/2023 | c) Developed with both contracting and in-house resources | Deloitte, AzureAI | Yes | Informal, unofficial translations of PDFs and Microsoft Word documents. Each document includes a watermark denoting that the AI translation is unofficial and must be reviewed by a human. | We do not train nor fine-tune the foundational model, but there is ongoing evaluation of the performance of the foundational model. | No | k) None of the above | No ||||||||||||||
| Department Of State | DS | Diplomatic Security - Legal Instruction Unit | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Generative AI | Insufficient personnel and resources available to create multimedia, conduct necessary research, and develop required curriculum. | Cost savings due to increased efficiency and decreased contracts/personnel to meet the needs of the Department. | The various AI systems will be used to synthesize information and provide recommendations on curriculum content, such as scenario development, multimedia content, and character script recommendations. | 08/01/2025 | c) Developed with both contracting and in-house resources | LexisNexis, ChatGPT | Yes | The various AI systems will be used to synthesize information and provide recommendations on curriculum content, such as scenario development, multimedia content, and character script recommendations. | Publicly available (legal research and related materials). | No | k) None of the above | No | ||||||||||||||
| Department Of State | CA | NIV Adjudication Review Recommendation Engine (ARRE) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ARRE is used to make post-adjudication managerial quality assurance review time more efficient. The current required managerial review of cases includes a randomly selected portion, which is not optimized to help managers allocate time toward cases where secondary review is most likely to surface documentation gaps, policy-complex adjudications, or process compliance issues. As a result, managers may spend review capacity on routine cases while missing opportunities to identify coaching needs or quality issues within the fixed review window. | The primary purpose of ARRE is to support the post-adjudication, managerial quality assurance review workflow by helping managers efficiently meet existing review requirements through queue ordering and prioritization. This provides more efficient use of manager review time and more consistent post-adjudication oversight by better targeting the fixed, required managerial review effort toward cases with atypical attributes that warrant a second look for quality assurance purposes. The expected benefits are improved review consistency and improved documentation of decision-making. | ARRE produces a unitless anomaly score (0–1) for each already-adjudicated case and uses the score to generate a post-level ranked list that bins cases into priority tiers (e.g., high/medium/low) for managerial review. Outputs are advisory: managers may override or disregard the prioritization and may review any case consistent with existing authorities and review requirements. The output is used to help order the managerial quality assurance workload; it is not a decision output and is not used as an applicant risk determination. ARRE is used only after an adjudication is complete to support internal managerial oversight and does not serve as a basis for any visa eligibility determination or other binding action affecting the applicant. ARRE does not make, recommend, or change visa issuance/refusal decisions; all adjudicative determinations remain the responsibility of U.S. government officials. | 01/01/2025 | c) Developed with both contracting and in-house resources | Guidehouse | No | ARRE produces a unitless anomaly score (0–1) for each already-adjudicated case and uses the score to generate a post-level ranked list that bins cases into priority tiers (e.g., high/medium/low) for managerial review. Outputs are advisory: managers may override or disregard the prioritization and may review any case consistent with existing authorities and review requirements. The output is used to help order the managerial quality assurance workload; it is not a decision output and is not used as an applicant risk determination. ARRE is used only after an adjudication is complete to support internal managerial oversight and does not serve as a basis for any visa eligibility determination or other binding action affecting the applicant. ARRE does not make, recommend, or change visa issuance/refusal decisions; all adjudicative determinations remain the responsibility of U.S. government officials. | ARRE was trained on a random sample of NIV application data from 2021 onward drawn from the Consular Consolidated Database. Performance was evaluated through controlled tests and pilot reviews at multiple posts, where managers compared ARRE-selected cases against randomly selected cases and consistently found that ARRE's recommendations yielded a higher proportion of cases warranting managerial review. | Yes | b) Sex c) Age | Yes ||||||||||||||
| Department Of State | CA | Live Consular AI Language Augmentation (LCALA) - Visa Interview Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Natural Language Processing (NLP) | Language gaps between applicants and non-native-speaker adjudicators create inconsistent, ad-hoc translation during brief (~3-minute) interviews, or translation limited to questions the adjudicator is familiar with asking, increasing the risk of misunderstanding (dialect/register, pronouns, named entities) and forcing repeat questions, delays, or uneven outcomes. Interpreter availability is limited, and reliance on bilingual staff is not scalable to demand and will impact other consular functions. These constraints reduce interview efficiency, strain officer workload, and can erode customer experience and perceived equity. | More consistent comprehension in ~3-minute interviews; Improved efficiency and throughput; Equity and customer experience; Reduced interpreter burden; Operational resilience | LCALA provides real-time transcription and neural machine translation of spoken exchanges at the interview window, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | a) Purchased from a vendor | Microsoft | No | LCALA provides real-time transcription and neural machine translation of spoken exchanges at the interview window, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | LCALA uses Microsoft's vendor-managed Speech-to-Text and Neural Machine Translation models, which are trained and fine-tuned on large, proprietary multilingual speech/text corpora and evaluated with standard metrics (e.g., WER for speech; BLEU/ChrF/COMET with human review for translation). No Department of State audio or transcripts are used to train or fine-tune these models (no-trace processing). For this pilot, the team performs limited operational Quality Assurance (e.g., sampled named-entity accuracy, latency, officer re-ask rates) to evaluate performance in the visa-interview context. | No | k) None of the above | No |||||||||||||||
| Department Of State | CA | AI Programmatic Insights | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | Regional Directors (RDs) must analyze current passport trends and data to determine overall performance and resource requirements to enable successful alignment to overall performance targets; however, the raw data requires analysis and doesn't currently allow for easy input to evaluate different scenarios, which results in RDs having to respond to passport performance with limited information. | An AI-driven solution that empowers Regional Directors to forecast programmatic trends, anticipate demand cycles, and respond effectively to external factors. This solution empowers Regional Directors with data-driven insights to anticipate demand, optimize resources, and proactively address challenges impacting passport agencies. | Monthly snapshots, overview of historical trends, agency-level overview analysis, and agency-specific highlights. | b) Developed in-house | No | Monthly snapshots, overview of historical trends, agency-level overview analysis, and agency-specific highlights. | Staffing data, Overtime data, and Age Tracker Report data | No | k) None of the above | Yes ||||||||||||||||
| Department Of State | CA | RegScale | a) Pre-deployment The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI in RegScale is intended to reduce the manual workload, complexity, and latency in compliance management. It automates the extraction and initial drafting of regulatory documentation, identifies control gaps, and provides plain-language explanations of requirements. The goal is to improve efficiency, accuracy, and continuous audit readiness while enabling small teams to scale compliance across multiple regulatory frameworks. | The expected benefits of RegScale's AI include reduced compliance costs, faster audit readiness, improved accuracy of regulatory reporting, and more efficient use of agency staff resources. For the general public, this translates into stronger protection of sensitive data, improved transparency, and quicker delivery of secure government services. | The AI system in RegScale retrieves compliance documentation, control gap analyses, plain-language explanations of regulations, policy-to-control mappings, and continuous compliance reports. These outputs are designed to reduce manual workload, accelerate audit readiness, and provide real-time visibility into an agency's compliance posture. | The AI system in RegScale retrieves compliance documentation, control gap analyses, plain-language explanations of regulations, policy-to-control mappings, and continuous compliance reports. These outputs are designed to reduce manual workload, accelerate audit readiness, and provide real-time visibility into an agency's compliance posture. ||||||||||||||||||||||
| Department Of State | CA | Live Consular AI Language Augmentation (LCALA) - OCS/ACS Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Emergency Management | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | LCALA OCS/ACS Pilot aims to close language gaps in routine, time-sensitive citizen services by providing on-demand interpretation for calls and in-person interactions. Today, limited interpreter availability and uneven reliance on bilingual staff can cause delays, confusion, and repeat contacts when conveying guidance or coordinating with local authorities. LCALA seeks to improve clarity and timeliness of these communications, reducing callbacks and handoffs, while remaining assistive only and not replacing certified interpreters where required. | Faster, clearer OCS/ACS communications; Fewer repeat contacts and escalations; Service equity and accessibility; Staff efficiency; Interagency coordination. | LCALA provides real-time transcription and neural machine translation of spoken exchanges on demand, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | 01/01/2025 | a) Purchased from a vendor | Microsoft | No | LCALA provides real-time transcription and neural machine translation of spoken exchanges on demand, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | LCALA uses Microsoft's vendor-managed Speech-to-Text and Neural Machine Translation models, which are trained and fine-tuned on large, proprietary multilingual speech/text corpora and evaluated with standard metrics (e.g., WER for speech; BLEU/ChrF/COMET with human review for translation). No Department of State audio or transcripts are used to train or fine-tune these models (no-trace processing). For this pilot, the team performs limited operational Quality Assurance (e.g., sampled named-entity accuracy, latency, officer re-ask rates) to evaluate performance in the visa-interview context. | No | k) None of the above | No ||||||||||||||
| Department Of The Interior | ONRR | DOI-0270 | Data Extraction Using MS Power Automate AI Functionality [2024 INV#WO0000000110500] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0269 | Liable Party Research [2024 INV#WO0000000110496] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE ||||||||||||||||||||||||
| Department Of The Interior | OS | DOI-0268 | I-NEPA System: Leveraging Artificial Intelligence (AI) for Enhanced Efficiency [2024 INV# WO0000000111250] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE ||||||||||||||||||||||||
| Department Of The Interior | OS | DOI-0267 | Public Comment Analysis Tool (PCAT): Leveraging Artificial Intelligence (AI) for Enhanced Efficiency [2024 INV#WO0000000106351] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE ||||||||||||||||||||||||
| Department Of The Interior | NPS | DOI-0266 | Use of AI to Enhance Flash Flood Forecast Tool [2024 Inv#WO0000000110323] | Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There is an opportunity to better predict rainfall on a watershed scale in Great Smoky Mountains National Park and provide forecasts of flooding events with a goal of a 24+ hour lead time. | Once we can implement and use the flood forecasting app, we anticipate being able to use it proactively to close at-risk sections of the park during forecast flooding events, saving lives and reducing risks to first responders. | Improved flood forecasts. | FALSE | Improved flood forecasts. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | NPS | DOI-0265 | Bird Nest Detection [2024 Inv#WO0000000110506] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Monitoring of colonial nesting birds with manual photo processing takes substantial time and effort. The goal is to identify bird nests with an object detection model. | Researchers will be able to monitor bird colonies and their populations more efficiently and consistently. | Assessments of active nests for bird species. | FALSE | Assessments of active nests for bird species. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0264 | Remote Sensing Coastal Change - Shoreline Change | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Rapid classification of satellite imagery to determine shoreline change | Quicker, more accurate identification of risks to public safety and infrastructure due to erosion or other shoreline changes. | Predictions of shoreline change | FALSE | Predictions of shoreline change | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0260 | ROV Smart Touch Subsea Pipeline Inspections | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Other | The lack of efficiency and or capability for BSEE and the Oil and Gas industry to inspect bolt failures for underwater pipelines. | Potentially enhance subsea pipeline inspections by integrating advanced robotics and machine learning technologies. | Bolt and flange tightness level prediction | FALSE | Bolt and flange tightness level prediction | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0259 | Well Risk Assessment [2024 Inv# WO0000000108776] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE ||||||||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0258 | Sustained Casing Pressure Identification [2024 Inv# WO0000000108777] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE ||||||||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0257 | Level 1 Survey Report Corrosion Level Classification | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Offshore operators conduct Level 1 surveys annually to report on platform structural integrity, as mandated by 30 CFR 250.901(a)(7), and submit these surveys to BSEE. Each survey includes a corrosion assessment of the platform with accompanying photos. Each area is assigned a coating grade, which is a key indicator of a platform's overall structural health. Currently, BSEE manually reviews each report to determine if a platform requires further audits, a process that is both time and labor intensive. | Support a more efficient and accurate review of Level 1 Survey photo corrosion levels | The outputs will include a comparison between the original corrosion level assigned to each image in the Level 1 Survey and the corresponding level determined by the machine learning algorithm. | Developed with both contracting and in-house resources | NASA | FALSE | The outputs will include a comparison between the original corrosion level assigned to each image in the Level 1 Survey and the corresponding level determined by the machine learning algorithm. | FALSE | FALSE ||||||||||||||||
| Department Of The Interior | BSEE | DOI-0256 | Well Activity Report Classification | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Researching the use of masked language models and convolutional deep neural networks to identify classification systems for significant well events using data from Well Activity Reports. | Enable quicker detection of significant well events to help BSEE personnel mitigate risks and address issues more efficiently. | Classification of the type of significant event a Well Activity Report describes. | Developed with both contracting and in-house resources | FALSE | Classification of the type of significant event a Well Activity Report describes. | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | BSEE | DOI-0255 | Autonomous Drone Inspections | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Other | The Bureau of Safety and Environmental Enforcement, within the Department of the Interior, is requesting a trade study of the feasibility of autonomously inspecting offshore facilities that are non-boardable due to various hazards. These inspections are currently performed by inspectors at a standoff distance from the non-boardable facility, on board boats or helicopters, or from land. The distance between the personnel and the facility reduces the quality of inspections that are possible for non-boardable facilities. | Increase inspection capabilities and efficiency through the use of small autonomous uncrewed aerial systems (sUAS). | Multiple inspection outputs, including determinations of corrosion level, whether a platform is boardable, and whether it has methane leaks. | Developed with both contracting and in-house resources | FALSE | Multiple inspection outputs, including determinations of corrosion level, whether a platform is boardable, and whether it has methane leaks. | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | FWS | DOI-0254 | Development of a computer vision model to monitor for early detection of habitat loss across the landscape. [2024 INV#DOI-65] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | The U.S. Fish and Wildlife Service is planning to partner with the Chesapeake Conservancy, an NGO, to develop a computer vision model to monitor for early detection of habitat loss across the landscape, a significant threat to biodiversity, including threatened and endangered species. By using a computer vision model one can rapidly identify and flag areas where habitat loss may be occurring due to natural or human-caused disturbances. Early detection can facilitate rapid responses, when appropriate, or allow practitioners to accurately calculate habitat loss over time. More accurate estimates of habitat loss allow for better management decisions and potentially shorter recovery times for threatened and endangered species. | AI-powered computer vision enables early detection of habitat loss, allowing faster, more accurate responses to threats. This supports better conservation decisions and helps protect and recover threatened and endangered species. | AI-powered computer vision enables early detection of habitat loss, allowing faster, more accurate responses to threats. This supports better conservation decisions and helps protect and recover threatened and endangered species. | FALSE | AI-powered computer vision enables early detection of habitat loss, allowing faster, more accurate responses to threats. This supports better conservation decisions and helps protect and recover threatened and endangered species. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | IBC | DOI-0253 | Intelligent Optical Character Recognition [2024 INV#WO0000000110733] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0252 | Image Generation and Audio Video Editing [2024 Inv#WO0000000110551] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0251 | Machine Learning Model Optimization [2024 INV#WO0000000110563] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0250 | Video Creation and Editing [2024 Inv#WO0000000110494] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0249 | ONRR Video Hosting Platform [OVHP] [2024 Inv#WO0000000110488] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | USGS | DOI-0248 | VoiceAtlas no-code chatbot framework | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Many business areas in USGS could benefit from AI tools but do not have technical expertise on staff. | VoiceAtlas is an out-of-the-box, no-code solution for non-technical employees. | Chatbots with guardrails and knowledge bases that can be created and maintained by non-technical employees. | Purchased from a vendor | Navteca | FALSE | Chatbots with guardrails and knowledge bases that can be created and maintained by non-technical employees. | product is being tested with public knowledge bases, such as Library Guide documents | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0247 | Automated Walrus Haulout Monitoring [2024 INV#WO0000000110052] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | provide a framework for using pre-trained image classification convolutional neural network (CNN) models to make predictions on unlabeled image datasets to provide data for further analysis of walrus (Odobenus rosmarus) coastal haulout occupation | time savings, reduce manual image review | determining the presence and absence of walruses at haulout locations via remote camera traps | Developed in-house | FALSE | determining the presence and absence of walruses at haulout locations via remote camera traps | camera trap imagery | https://www.sciencebase.gov/catalog/ | FALSE | None of the Above | FALSE | https://code.usgs.gov/ | |||||||||||||
| Department Of The Interior | USGS | DOI-0246 | ChatGPT to write Python scripts for ArcGIS Pro Maps to be CVD-Friendly [2024 INV#WO0000000107402] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | automate the process of changing colors of planetary geologic map units within ArcGIS Pro so that they are all color-vision deficiency friendly | time savings | colors of planetary geologic map units within ArcGIS Pro so that they are all color-vision deficiency friendly | Developed in-house | FALSE | colors of planetary geologic map units within ArcGIS Pro so that they are all color-vision deficiency friendly | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0245 | Machine Learning approach to predict the composition of seafloor massive sulfide deposits [2024 INV#WO0000000108420] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict the composition of seafloor massive sulfide deposits | time and cost savings | publications | Developed in-house | FALSE | publications | geochemical data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0244 | Harmful Algal Bloom prediction and detection system for Williams Fork Reservoir [2024 INV#WO0000000184339] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We have used remote sensing products to detect harmful algal blooms throughout the Upper Colorado Basin. However, there have been multiple algal blooms in the Williams Fork Reservoir that have remained undetected. | leverage field data collected with satellite overpasses to tease out what may be causing this discrepancy. We want to leverage AI/ML to see if we can build new models or improve existing ones to boost the signal in this high-altitude reservoir | potential drivers (wind, nutrients, cloud cover) to potentially predict (based on antecedent conditions) when and where new blooms will occur | Developed in-house | FALSE | potential drivers (wind, nutrients, cloud cover) to potentially predict (based on antecedent conditions) when and where new blooms will occur | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0243 | Google Cloud Vision [2024 INV#WO0000000154445] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | USGS Water Mission Area needs text extraction from publicly available topographic maps as a callable function of an application. | time savings, cost savings | text data extracted from topographic maps | 02/03/2025 | Purchased from a vendor | FALSE | text data extracted from topographic maps | publicly available topographic maps | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0242 | Google Vertex AI Document workbench [2024 INV#WO0000000154393] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | The USGS Energy Resource Program needs to use Google Vertex AI document workbench to perform data rescue on some old paper USGS publication tables with oil and gas production data from the 1940s-1980s. This data is not available in digital form. | data rescue | digital data extracted from paper publication - oil and gas production data from 1940s - 1980s | 02/03/2025 | Purchased from a vendor | FALSE | digital data extracted from paper publication - oil and gas production data from 1940s - 1980s | oil and gas production data from 1940s - 1980s | https://pubs.usgs.gov/publication/25181 | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0241 | USGS Azure OpenAI ChatGPT [2024 INV#WO0000000154392] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Various groups within USGS need a generic ChatGPT API to call from IDEs and various applications. | Cost savings, USGS sought to implement its own pay-as-you-go model in the DOI Azure tenant. | TBD | 02/03/2025 | Purchased from a vendor | OpenAI, Microsoft | FALSE | TBD | none, we are using the generic GPT 4.0 model from OpenAI | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0240 | Use modern AI/ML approaches to gain insight into USGS unstructured data such as text [2024 INV#WO0000000131706] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There is a large amount of unstructured data within USGS. Current efforts are to manually extract and analyze that data. We want to apply modern AI/ML approaches such as RAG (Retrieval-Augmented Generation) to gain insights from that unstructured data. | gain insights from that unstructured data | Develop AI/ML approaches such as RAG (Retrieval-Augmented Generation) to gain insights from that unstructured data | Purchased from a vendor | FALSE | Develop AI/ML approaches such as RAG (Retrieval-Augmented Generation) to gain insights from that unstructured data | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0239 | Parsing large quantities of text data [2024 INV#WO0000000113692] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The goal is to assess how news sources discuss water events, how this relates to water availability and vulnerability, and how these patterns vary over time/space. | time savings, without AI this would require reading through thousands of news articles to extract relevant information including mention of a specific hazard (e.g., drought, flood, HABs), geographic region, organization, and other topical keywords. | identify noteworthy water events in the Upper Colorado River Basin from news articles | Purchased from a vendor | FALSE | identify noteworthy water events in the Upper Colorado River Basin from news articles | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0238 | PAWSC Ecotoxicology PFAS Machine Learning [2024 INV#WO0000000112908] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | assess the ecological health risk of PFAS in Pennsylvania stream surface water | predict potential PFAS exposure effects in unmonitored stream reaches | Leveraging a tailored convolutional neural network (CNN), a validation accuracy of 78% was achieved, directly outperforming traditional methods that were also used, such as logistic regression and gradient boosting (accuracies of 65%) | 12/09/2024 | Developed in-house | FALSE | Leveraging a tailored convolutional neural network (CNN), a validation accuracy of 78% was achieved, directly outperforming traditional methods that were also used, such as logistic regression and gradient boosting (accuracies of 65%) | PFAS concentrations in environmental waters, specifically streams for this model. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0237 | Prioritized Constituents: Sediment [2024 INV#WO0000000109726] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Regional prediction of suspended sediment concentration in unmonitored rivers to characterize sediment transport in the Delaware, Illinois, and Colorado River Basins. | ability to characterize sediment transport in the Delaware, Illinois, and Colorado River Basins | prediction of suspended sediment concentration in unmonitored rivers | 10/02/2023 | Developed in-house | FALSE | prediction of suspended sediment concentration in unmonitored rivers | Climate, Hydrologic Data, Land Use, Terrain Elevation | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0236 | Avian population estimates from passive acoustic monitoring [2024 INV#WO0000000109725] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Reliable estimates of avian abundance from acoustic recordings | improved estimates of avian abundance | estimates of avian abundance | FALSE | estimates of avian abundance | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0235 | FEMA mixed population flood-frequency analysis [2024 INV#WO0000000109723] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Classification of historical floods based on causal mechanisms to support improved estimation of flood reoccurrence intervals | improved estimation of flood reoccurrence intervals | Classification of historical floods based on causal mechanisms | FALSE | Classification of historical floods based on causal mechanisms | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0234 | Data-Driven Streamflow Drought [2024 INV#WO0000000109714] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | streamflow drought forecasts | ability to provide drought forecasts using data-driven, machine learning approaches for USGS gage locations across the continental U.S. | drought forecasts | 10/03/2022 | Developed in-house | FALSE | drought forecasts | Climate, earth science, land use | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0233 | National-Extent Groundwater Quality Prediction for the National Water Census and Regional Integrated Water Availability Assessments [2024 INV#WO0000000109709] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | provide Nationally consistent predictions of groundwater quality (salinity and nutrients) relevant for human and ecological uses and its influence on surface-water | Nationally consistent predictions of groundwater quality can be integrated into comprehensive water-availability assessments including the National Water Census and regional Integrated Water Availability Assessments | predictions of groundwater quality and integration into comprehensive water-availability assessments including the National Water Census and regional Integrated Water Availability Assessments | 10/01/2021 | Developed in-house | FALSE | predictions of groundwater quality and integration into comprehensive water-availability assessments including the National Water Census and regional Integrated Water Availability Assessments | Earth Science, Land Use, Climate, Water Quality, Population Density | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0232 | Use of artificial intelligence tools for optimization and documentation for computer codes [2024 INV#WO0000000109681] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | computer codes are needed that implement earthquake rupture forecasts and ground-motion models. This project uses ChatGPT to suggest optimizations and documentation for computer codes. | time savings | documentation, optimized code | 10/02/2023 | Developed with both contracting and in-house resources | OpenAI | FALSE | documentation, optimized code | computer codes | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0231 | Mapping sagebrush from drones to satellites [2024 INV#WO0000000109568] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | accurate maps of sagebrush are needed to identify seasonal habitats of sage-grouse for the Bureau of Land Management | extend presence modeling to map fractional cover of sagebrush in the Dakotas | accurate maps of sagebrush to identify seasonal habitats of sage-grouse | FALSE | accurate maps of sagebrush to identify seasonal habitats of sage-grouse | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0230 | Delineating sub-surface drainage using satellite imagery [2024 INV#WO0000000109525] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Knowing subsurface drainage (tile-drain) extent is integral to understanding how landscapes respond to precipitation events and subsequent days of drying, as well as how soil characteristics and land management influence stream response. | a time series of tile-drain extent would inform one aspect of land management that complicates our ability to explain streamflow and water-quality as a function of climate variability or conservation management | time series of tile-drain extent | Developed in-house | FALSE | time series of tile-drain extent | Satellite imagery, soils data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0229 | Vegetation mapping on the Hawaiian island of Lanai [2024 INV#WO0000000109501] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | accurately classify plant species across the Hawaiian island of Lanai, producing detailed maps that can support conservation planning and monitoring of both native and invasive species | accurately classify plant species | detailed maps that support conservation planning and monitoring of both native and invasive species | 03/07/2024 | Developed in-house | FALSE | detailed maps that support conservation planning and monitoring of both native and invasive species | Digital Globe WorldView-2 satellite imagery; airborne imagery collected by EagleView | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0228 | Reinforcement Learning for Helmholtz Coil Operation and Simulation [2024 INV#WO0000000109497] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | optimize performance of its magnetic observatories | reinforcement learning (RL) can significantly aid in the operation of a Helmholtz coil by optimizing its performance in generating uniform magnetic fields | optimized performance in generating uniform magnetic fields | FALSE | optimized performance in generating uniform magnetic fields | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0227 | Population and critical habitat modeling of overwintering monarch butterflies [2024 INV#WO0000000109410] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Monarch butterflies in the western United States overwinter at very specific locations across coastal California. As monarch populations decline, it becomes important to identify the characteristics that make an overwintering grove suitable habitat. | Understanding the land cover and climatic factors that influence site selection by monarchs can aid land managers in both making decisions to support existing critical habitat and identifying previously unknown locations where monarchs overwinter | Characteristics that make an overwintering grove suitable habitat. | 10/01/2023 | Developed in-house | FALSE | Characteristics that make an overwintering grove suitable habitat. | High resolution land cover data, population abundance data, regional climate data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0226 | Machine Learning algorithm for stream velocity prediction [2024 INV#WO0000000109319] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | time-of-travel web-based application that will allow users to estimate travel times in a spill response scenario with greater accuracy | more accurate predictions of travel times in a spill response scenario | travel time estimates during a spill response scenario | Developed in-house | FALSE | travel time estimates during a spill response scenario | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0225 | Automated otolith aging using image processing [2024 INV#WO0000000109315] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Fisheries managers and researchers often need to know the age of fish for population estimates, stock assessment, and similar projects. Fish otoliths (ear bones) accumulate annual rings (similar to trees). | reduce variability across individual agers and automate the aging process of counting otolith rings, possibly saving time | automated aging process | 10/01/2023 | Developed in-house | FALSE | automated aging process | Otolith images (pictures) with known ages | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0224 | Machine learning for tsunami source zones [2024 INV#WO0000000109313] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | State of the art tsunami hazard analysis for coastal communities and infrastructure is computationally demanding. | computational efficiency | ML will be used to select the most representative source zones (among thousands of offshore earthquake ruptures) | 10/01/2024 | Developed in-house | FALSE | ML will be used to select the most representative source zones (among thousands of offshore earthquake ruptures) | Offshore fault slip rate data and historical seismicity | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0223 | Oceanographic, coastal, and geomorphic change analysis: data generation, QC/QA, and data management [2024 INV#WO0000000109310] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine learning to quantify coastal/marine change across broad scales. QC/QA processes in place to assess data robustness. | Verified data will be used by USGS projects for forecasting trends (i.e., shorelines, role of permafrost) in a variety of coastal/marine settings for US coasts. | quantified coastal/marine change across broad scales and verified data for forecasting | 10/01/2024 | Developed in-house | FALSE | quantified coastal/marine change across broad scales and verified data for forecasting | Satellite, aerial and fixed camera imagery. Oceanographic and coastal time series data. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0222 | Quantifying the effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains [2024 INV#WO0000000109305] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Quantifying the effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | Ability to quantify the effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | Quantified effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | 10/01/2018 | Developed in-house | FALSE | Quantified effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | UAS images | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0221 | Computationally efficient emulation of spheroidal elastic deformation sources using machine learning [2024 INV#WO0000000109302] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | analytical models are fast but can be inaccurate as they do not correctly satisfy boundary conditions for many geometries, while numerical models are slow and may require specialized expertise and software | we trained supervised machine learning emulators (model surrogates) based on parallel partial Gaussian processes which predict the output of a finite element numerical model with high fidelity | output of a finite element numerical model with high fidelity | 01/02/2023 | Developed in-house | FALSE | output of a finite element numerical model with high fidelity | model output | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0220 | Wildlife species recognition and distance from camera estimation [2024 INV#WO0000000109245] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | need reliable population estimates of animal density | ability to obtain reliable population estimates of animal density | population estimates of animal density | Developed in-house | FALSE | population estimates of animal density | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0219 | Machine Learning to evaluate water quality [2024 INV#WO0000000109241] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Examining the effect of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | Better understanding of the effects of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | Better understanding of the effects of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | Developed in-house | FALSE | Better understanding of the effects of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | water-quality data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0218 | Ecological niche models for bat species [2024 INV#WO0000000109233] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are trying to understand what environmental factors determine the presence and absence of bat species across their range. | understand what environmental factors determine the presence and absence of bat species across their range | environmental factors that determine the presence and absence of bat species | 01/01/2022 | Developed in-house | FALSE | environmental factors that determine the presence and absence of bat species | bat presence locations, environmental raster data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0217 | Development of a Strategic Framework for Use and Implementation of Machine Learning in Energy Resource Program Workflows [2024 INV#WO0000000109216] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | development of a strategic framework for integrating Energy Resources Program science with traditional information technology related platforms | position the ERP to more effectively deliver its unique data-driven information products | (1) adoption of ML pipelines/models in ERP project workflows; (2) modernization of key ERP data assets through API extension; and (3) technology transfer, targeted training, and multi-disciplinary career development for existing geospatial ERP workforce | FALSE | (1) adoption of ML pipelines/models in ERP project workflows; (2) modernization of key ERP data assets through API extension; and (3) technology transfer, targeted training, and multi-disciplinary career development for existing geospatial ERP workforce | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0216 | Quantifying Watershed Controls on Fine Sediment Flux to Lake Tahoe, California/Nevada [2024 INV#WO0000000109215] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | estimate watershed parameters of importance that drive sediment flux | Ability to better estimate watershed parameters of importance that drive sediment flux. | quantified watershed parameters | 10/01/2019 | Developed in-house | FALSE | quantified watershed parameters | Stage and turbidity from NWIS, water balance variables from Western Land Data Assimilation (NASA) land surface model | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0215 | Seismology of Magmatic Injection [2024 INV#WO0000000109214] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | understand the nature and dynamics of seismic sources associated with magmatic injection and magmatic transport | greater understanding of volcanic systems | understand the nature and dynamics of seismic sources associated with magmatic injection and magmatic transport | 10/01/2023 | Developed in-house | FALSE | understand the nature and dynamics of seismic sources associated with magmatic injection and magmatic transport | seismic data collected during the joint USGS/NSF Kilauea Imaging experiment, gravity data, and geodetic data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0214 | Earthquake Catalog Development [2024 INV#WO0000000109208] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | develop more complete and robust earthquake catalogs | volcanic earthquake catalog enhancement using integrated detection, matched-filtering, and relocation tools | more complete and robust earthquake catalogs | 10/01/2021 | Developed in-house | FALSE | more complete and robust earthquake catalogs | Seismic data collected by HVO during a nodal deployment across Pahala, HI | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0213 | Seedling Identification and Percent Growth Analysis [2024 INV#WO0000000109200] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | extraction of alphanumeric labels and analyze seedling growth in petri dish images | saving time and reducing human error | alphanumeric labels from petri dish images | 10/01/2023 | Developed in-house | FALSE | alphanumeric labels from petri dish images | Numerous images of seedlings taken over a span of 5 days. Around 2000 images in total. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0212 | Gulf Coast Geologic Energy Machine Learning [2024 INV#WO0000000109198] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict expected ultimate recovery of shale oil wells | predict total organic carbon | ML model using elemental data to predict total organic carbon | 10/01/2023 | Developed in-house | FALSE | ML model using elemental data to predict total organic carbon | oil and gas well productivity and resource recovery data and recovery decline curves, geological and oil/gas basin data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0211 | Predicting Sparse (Geothermal) Resources Availability by using Machine Learning [2024 INV#WO0000000109195] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | developing new ML metrics for evaluating model performance that work with sparse natural resources, addressing the extreme mathematical sparsity of these resources at the regional scale, and engineering new evidence layers to inform modeling workflows | increasing the explainability, reproducibility, and accessibility of the assessment modeling process | new ML metrics for evaluating model performance | FALSE | new ML metrics for evaluating model performance | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0210 | Using machine learning to detect invasive bullfrogs [2024 INV#WO0000000109159] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Detecting bullfrogs along their invasion front in order to inform removal efforts | rapid detection | identification of invasive bullfrogs | 05/01/2020 | Developed in-house | FALSE | identification of invasive bullfrogs | audio recordings | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0209 | Deep Learning application for automated mapping of surficial landforms, surficial geological deposits, and abandoned mine sites from lidar-derived topography [2024 INV#WO0000000109153] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | mapping of surficial landforms, surficial geological deposits, and abandoned mine sites | automation of mapping | maps of surficial landforms, surficial geological deposits, and abandoned mine sites | 10/01/2024 | Developed in-house | FALSE | maps of surficial landforms, surficial geological deposits, and abandoned mine sites | lidar-derived topography | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0208 | Oil Spill Response for Ice-Covered Rivers [2024 INV#WO0000000109142] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The goal of this DOI Inland Oil Spill Preparedness Program (IOSPP) funded work is to provide rapid, near real-time information to oil spill response crews concerning the safety of ice-covered areas | provide rapid, near real-time information to oil spill response crews concerning the safety of ice-covered areas | near real-time information to oil spill response crews concerning the safety of ice-covered areas | Developed in-house | FALSE | near real-time information to oil spill response crews concerning the safety of ice-covered areas | unknown | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0207 | Pacific Northwest Stream Flow Permanence [2024 INV#WO0000000109137] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | streamflow classification of perennial versus non-perennial, which is the charge of many land steward agencies | inform management decisions that require streamflow classification of perennial versus non-perennial | models used to inform management decisions that require streamflow classification of perennial versus non-perennial | 10/01/2023 | Developed in-house | FALSE | models used to inform management decisions that require streamflow classification of perennial versus non-perennial | unknown | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0206 | SAMPLE Toolbox [2024 INV#WO0000000109123] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | monitoring vegetation | ability for land managers to develop plans for monitoring vegetation | A toolbox for land managers to develop plans for monitoring vegetation | Developed in-house | FALSE | A toolbox for land managers to develop plans for monitoring vegetation | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0205 | Mapping wildfire fuels in previously burned landscapes [2024 INV#WO0000000109121] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | understand how land management treatments affect the probability of reburning | understand how land management treatments affect the probability of reburning | probability of reburning | FALSE | probability of reburning | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0204 | Lava lake thermal pattern classification using self organizing maps and relationships to eruption processes at Kilauea Volcano, Hawaii [2024 INV#WO0000000109098] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | classify lava lake thermal patterns | ability to classify lava lake thermal patterns from thermal infrared time-lapse imagery | classified lava lake thermal patterns | 10/01/2018 | Developed in-house | FALSE | classified lava lake thermal patterns | infrared time-lapse imagery | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0203 | Advancing image-based surveys to support sea duck conservation along the Pacific Flyway [2024 INV#WO0000000109096] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Safety, expense, observer bias and lack of methodological consistency are rising concerns associated with observer-based surveys, making it imperative to transition to more sustainable methods. | Digital aerial surveys (DAS) that automate counts from aerial imagery using convolutional neural network (CNN) models are one way to improve survey safety and count accuracy. | Standardized DAS for the lower Pacific Flyway to help maximize safety, while improving data consistency and model accuracy among important regions within the Flyway. | FALSE | Standardized DAS for the lower Pacific Flyway to help maximize safety, while improving data consistency and model accuracy among important regions within the Flyway. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0202 | InSAR and other geodetic studies at Volcanoes [2024 INV#WO0000000109093] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | recognize transient signals in combined InSAR and GPS data that may be indications of impending hazardous volcanic activity | help predict hazardous volcanic activity | identification of hazardous volcanic activity | 01/01/2024 | Developed in-house | FALSE | identification of hazardous volcanic activity | ImageNet database, Sentinel 1 InSAR data, GNSS data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0201 | Climate Futures for Lizards and Snakes in Western North America [2024 INV#WO0000000109092] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying new management challenges to reptiles based on shifting environmental conditions | ability to identify new management challenges to reptiles based on shifting environmental conditions | management challenges to reptiles based on shifting environmental conditions | Developed in-house | FALSE | management challenges to reptiles based on shifting environmental conditions | point based occurrence, raster elevation data, modeled climate data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0200 | Predicting inundation dynamics of small forested wetlands [2024 INV#WO0000000109089] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | better understand the wetting/drying dynamics of small wetlands relevant to amphibians | help land managers in the Upper Midwest understand the wetting/drying dynamics of small wetlands relevant to amphibians | wetting/drying dynamics of small wetlands relevant to amphibians | FALSE | wetting/drying dynamics of small wetlands relevant to amphibians | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0199 | Machine-learning model to delineate sub-surface agricultural drainage from satellite imagery [2024 INV#WO0000000109078] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | delineate sub-surface agricultural drainage | ability to delineate sub-surface agricultural drainage from satellite imagery | classification of sub-surface agricultural drainage | 05/11/2023 | Developed in-house | FALSE | classification of sub-surface agricultural drainage | Satellite imagery included acquisition dates from 2008 to 2020. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0198 | Environmental streamflows in the United States: historical patterns and predictions [2024 INV#WO0000000109075] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | It is important that environmental streamflow assessments by water managers consider changes in climate, land use, and water management; this cannot be done effectively without understanding historical variability and changes in environmental streamflows | Estimates of environmental streamflows for ungaged streams | estimates of environmental streamflows for thousands of ungaged stream reaches | FALSE | estimates of environmental streamflows for thousands of ungaged stream reaches | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0197 | Extracting robust, searchable data from narrative geologic descriptions [2024 INV#WO0000000109022] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | Extracting robust, searchable data from narrative geologic descriptions | time savings | searchable geologic description data | 10/01/2024 | Developed in-house | FALSE | searchable geologic description data | Descriptions of geologic units taken from published reports. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0196 | Classifying GPS data to understand flight behavior of birds [2024 INV#WO0000000109015] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | understand under what circumstances eagles are more likely to collide with wind turbines | better understand circumstances where eagles are more likely to collide with wind turbines | classification of the flight behavior of birds | 11/01/2019 | Developed in-house | FALSE | classification of the flight behavior of birds | animal tracking data - GPS telemetry | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0195 | Whole-lake indexing of round goby abundances with photographic catch data [2024 INV#WO0000000109010] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | quantify abundances of one of the most abundant prey fishes in the Great Lakes, an invasive species called Round Goby | create a more effective method of monitoring abundances of prey fish across the entirety of the Great Lakes | quantified round goby abundances | 02/01/2019 | Developed in-house | FALSE | quantified round goby abundances | Image and position data from autonomous underwater vehicles; LiDAR bathymetry data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0194 | Predicting PFAS in shallow soils in northern New England [2024 INV#WO0000000108973] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict PFAS in soils across Maine, New Hampshire, and Vermont | more accurate prediction of PFAS in soils across Maine, New Hampshire, and Vermont | predictions of PFAS in soils across Maine, New Hampshire, and Vermont | 10/01/2023 | Developed in-house | FALSE | predictions of PFAS in soils across Maine, New Hampshire, and Vermont | Shallow soil PFAS data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0193 | Improving accuracy and precision of sonar-based estimates of fish abundance [2024 INV#WO0000000108800] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Sonar-based estimates of fish abundance are prone to inaccuracies that can limit their utility | improved accuracy and precision of USGS's annual prey fish abundance estimates | annual prey fish abundance estimates | 01/01/2023 | Developed in-house | FALSE | annual prey fish abundance estimates | Sonar transect data collected by conventional vessels and uncrewed surface vehicles | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0192 | Machine learning-based landscape feature classification using satellite and airborne imagery [2024 INV#WO0000000108791; 2024 INV#WO0000000108794] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | need to increase the accuracy of habitat and land cover classifications | enhanced accuracy of habitat and land cover classifications | habitat and land cover classifications | 08/01/2013 | Purchased from a vendor | ESRI | FALSE | habitat and land cover classifications | Airborne and Satellite imagery - often with required field-based training data | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0191 | Predicting PFAS occurrence in groundwater using machine learning [2024 INV#WO0000000108780] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict PFAS occurrence in groundwater at the depths of drinking water supplies across the conterminous U.S. | better understand the occurrence of PFAS in groundwater | predictions of PFAS occurrence in groundwater at the depths of drinking water supplies across the conterminous U.S. | 10/01/2023 | Developed in-house | FALSE | predictions of PFAS occurrence in groundwater at the depths of drinking water supplies across the conterminous U.S. | Groundwater well data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0190 | Machine Learning Image Classification of Wetlands and Soil moisture [2024 INV#WO0000000108779] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | inform land managers, planners, and researchers about historical and current changes to human and natural environments, focused on floods, droughts, and fires | ability to classify wetlands and soil moisture at large scales | quantification of causal processes behind wildfire | 10/01/2023 | Developed with both contracting and in-house resources | FALSE | quantification of causal processes behind wildfire | Training Samples, Raster Images | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0189 | Zero shot segmentation to expedite Quaternary geologic mapping [2024 INV#WO0000000108739; 2024 INV#WO0000000109150] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The construction of detailed geologic maps requires a lot of manual GIS data input to outline the extent of interpreted geologic features. | expedite the process of creating GIS data for geologic maps | GIS data for geologic maps | FALSE | GIS data for geologic maps | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0188 | Inventorying landforms with convolutional neural networks [2024 INV#WO0000000108738; WO0000000109117] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | efficiently identify and inventory landform features from lidar-derived topographic data | efficient identification of landform features | inventory of landform features | 02/01/2024 | Developed in-house | FALSE | inventory of landform features | Digital elevation models | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0187 | Tracking wetlands and water movement across watersheds [2024 INV#WO0000000108734] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Accurate prediction of flood and drought impacts requires understanding upstream surface water storage dynamics and storage capacity | classify satellite imagery into open and vegetated water extent, use deep learning algorithms to relate daily river discharge to meteorology and surface water storage dynamics | upstream surface water storage dynamics and storage capacity | FALSE | upstream surface water storage dynamics and storage capacity | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0186 | Everglades-Flux, Digital Surveys [2024 INV#WO0000000108630] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | automatically process Normalized Difference Vegetation Index images and come up with a true value of live vegetation and fill in missing data | automatically process Normalized Difference Vegetation Index images | true value of live vegetation | FALSE | true value of live vegetation | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0185 | Shoreline modeling [2024 INV#WO0000000108297; 2024 INV#WO0000000109312] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict shoreline evolution and compare their accuracy to traditional physics-based models | increased accuracy of shoreline evolution | predict shoreline evolution | 10/01/2023 | Developed in-house | FALSE | predict shoreline evolution | shoreline time series data, satellite imagery, oceanographic time series data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | FWS | DOI-0184 | Summarization of documents and output to ECOSphere species workflow [2024 INV#DOI-63] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | The ECOSphere species workflow relies on extracting relevant ecological and biological insights from a vast and continuously growing repository of unstructured documents, currently numbering in the millions. Manual review and summarization of these documents is infeasible due to scale, time constraints, and resource limitations. There is a critical need for an AI-driven solution that can automatically ingest, analyze, and summarize large volumes of scientific and technical documents, and seamlessly output structured summaries into the ECOSphere workflow. This will enhance data accessibility, accelerate species-related research, and support timely decision-making in environmental and conservation efforts. | Implementing AI-powered document summarization for the ECOSphere species workflow will significantly enhance operational efficiency by automating the extraction of key insights from millions of unstructured documents. This will reduce manual workload, accelerate species-related research, and support timely decision-making. | Structured Summaries of Documents: concise, machine-readable summaries of scientific, regulatory, and technical documents, with key metadata extraction (e.g., species name, habitat, threats, geographic location, publication date). Relevance Scoring: AI-generated confidence scores indicating the relevance of each document to specific species or ecological topics. Taxonomic and Thematic Tagging: automatic tagging of documents with species names, ecological terms, and conservation themes to support search and filtering. Workflow-Ready Data Packages: summarized content formatted for direct ingestion into ECOSphere workflows (e.g., JSON, XML, or database-ready formats). Audit Trail and Traceability: links to original documents and AI-generated summaries for transparency and validation. Integration Logs and Metrics: reports on the number of documents processed, summary accuracy, and integration success rates. | FALSE | Structured Summaries of Documents: concise, machine-readable summaries of scientific, regulatory, and technical documents, with key metadata extraction (e.g., species name, habitat, threats, geographic location, publication date). Relevance Scoring: AI-generated confidence scores indicating the relevance of each document to specific species or ecological topics. Taxonomic and Thematic Tagging: automatic tagging of documents with species names, ecological terms, and conservation themes to support search and filtering. Workflow-Ready Data Packages: summarized content formatted for direct ingestion into ECOSphere workflows (e.g., JSON, XML, or database-ready formats). Audit Trail and Traceability: links to original documents and AI-generated summaries for transparency and validation. Integration Logs and Metrics: reports on the number of documents processed, summary accuracy, and integration success rates. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0183 | Cell Phone Application for Oil Spill Detection [2024 INV#WO0000000108285] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | develop a model that can be used to interpret cell phone images to predict oil in environmental samples | The tool can be rapidly deployed for use in the field by the oil spill responder community. | prediction of oil in samples | FALSE | prediction of oil in samples | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0182 | Wave runup and total water level observations from time series imagery at several sites with varying nearshore morphologies [2024 INV#WO0000000108262] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | separation (segmentation) of land and water in images | ability to compare actuals to forecasted water levels | calculated water levels | 10/01/2024 | Developed in-house | FALSE | calculated water levels | imagery | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0181 | National Wildlife Disease Database (NWDD) [2024 INV#WO0000000108149; 2024 INV#WO0000000109192] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | bring together various wildlife health data streams across informational domains (i.e., laboratory results, environmental observations, news media, etc.) | visualize and contextualize information from one or more sources | advanced analytics to natural resource authorities | FALSE | advanced analytics to natural resource authorities | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | OS | DOI-0180 | Office of Grants Management (PGM) Grants Utility Tool | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Agentic AI | PGM faced growing operational and compliance challenges across the entire financial assistance lifecycle. Manual processes (project description reviews, pre-award SAM.gov validations, and detailed budget analyses) were extremely labor-intensive, inconsistent across bureaus, and vulnerable to human error. Staff were required to review thousands of records each year, including over 8,000 project descriptions, more than 13,000 pre-award validation actions, and more than 8,000 detailed budget submissions. Each task required extensive reading, cross-checking across multiple systems, and detailed documentation. These demands strained a shrinking grants workforce, delayed internal control reviews, increased the risk of compliance failures under 2 CFR 200, and diverted staff from higher-value oversight activities. The Department needed a standardized, accurate, and scalable way to conduct internal controls testing, ensure timely eligibility checks, and complete budget reviews without overwhelming staff resources or jeopardizing compliance. | Automated analysis increased objectivity, removed inconsistencies in how staff interpreted regulatory requirements, and provided faster, more reliable information to support program decisions. | The combined AI tools automatically generate standardized compliance outputs across project descriptions, budget reviews, and entity validations, replacing thousands of hours of manual analysis. They produce automated scoring, flags for risks or inconsistencies, cross-walks between budget documents, and complete audit-ready records aligned with internal control requirements. Together, these outputs streamline oversight, strengthen regulatory compliance, and create a consistent, defensible documentation trail for more than 29,000 annual financial assistance actions. | 04/10/2024 | Developed in-house | TRUE | The combined AI tools automatically generate standardized compliance outputs across project descriptions, budget reviews, and entity validations, replacing thousands of hours of manual analysis. They produce automated scoring, flags for risks or inconsistencies, cross-walks between budget documents, and complete audit-ready records aligned with internal control requirements. Together, these outputs streamline oversight, strengthen regulatory compliance, and create a consistent, defensible documentation trail for more than 29,000 annual financial assistance actions. | Various public sources | sam.gov | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0179 | Sediment Transport in Coastal Environments [2024 INV#WO0000000108118] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | imputation of missing values in oceanographic time-series | filling in missing data | more complete time-series data | 06/01/2025 | Developed in-house | FALSE | more complete time-series data | time-series of oceanographic and water quality variables | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0178 | Machine learning based shoreline time-series imputation, classification and forecasting (time-series analyses) [2024 INV#WO0000000108117] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | data generation and QC/QA procedures, for large scale and short-term forecasting of shoreline trends | time savings | data generation and QC/QA procedures | 06/01/2025 | Developed in-house | FALSE | data generation and QC/QA procedures | shoreline location time-series | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0177 | National Oceanographic Partnership Program (NOPP) [2024 INV#WO0000000108018] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine Learning based coastal sediments assessment and prediction | time savings | coastal sediments assessment and prediction | 06/01/2025 | Developed in-house | FALSE | coastal sediments assessment and prediction | aerial imagery and public satellite imagery, wave and tide information | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0176 | Using Machine Learning in USGS StreamStats to make suspended sediment and bedload predictions [2024 INV#WO0000000107977] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Getting estimates of suspended sediment and bedload in Minnesota rivers without sampling data. | A tool for resource managers who need estimates of suspended sediment and bedload in Minnesota rivers without sampling data. | Estimates of suspended sediment and bedload in Minnesota rivers | Developed in-house | FALSE | Estimates of suspended sediment and bedload in Minnesota rivers | Publicly available geospatial and continuous streamflow timeseries datasets. | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0175 | Deep Learning based image segmentation [2024 INV#WO0000000107975] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine Learning based shoreline detection and mapping, automated data suitability analyses from satellite imagery | time savings | shoreline mapping | 06/01/2025 | Developed in-house | FALSE | shoreline mapping | public satellite imagery | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0174 | Data-driven approaches to filling missing time-series data within the San Francisco Bay-Delta [2024 INV#WO0000000107683; INV#WO0000000107488] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Environmental time-series data may suffer from gaps at a variety of time scales, significantly reducing the number of observations to understand phenomena, identify change, calibrate models, and predict future behavior. | filling in missing time-series data | complete time-series data | Developed in-house | FALSE | complete time-series data | FALSE | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||||
| Department Of The Interior | USGS | DOI-0173 | Seabird and Marine Mammal Surveys Near Potential Renewable Energy Sites Offshore Central and Southern California [2024 INV#WO0000000107535] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Using rapidly developing machine-learning (ML) techniques, the USGS WERC team is developing new methods to automate the detection and counts of seabirds and marine mammals from digital imagery. | time and cost savings | seabird counts | Purchased from a vendor | Unknown | FALSE | seabird counts | Digital Imagery | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0172 | Foundation Models to Advance Earth Science [2024 INV#WO0000000107153] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | Advance the understanding of Earth's conditions and processes by developing and deploying generalist AI models (Foundation Models) trained on Earth Observations from field, suborbital, and orbital sensors. | These models, along with the resulting insights, will empower scientists and land managers to achieve more while also advancing the broader field of AI and machine learning science. | Information on Earth conditions and processes | 06/01/2024 | Developed in-house | FALSE | Information on Earth conditions and processes | Remotely sensed and other Earth observation data. | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0171 | Rangeland Condition Monitoring Assessment and Projection (RCMAP) [2024 INV#WO0000000107126] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To address the need for long-term tracking of vegetation change, scientists from the USGS and Bureau of Land Management (BLM) developed the Rangeland Condition Monitoring Assessment and Projection (RCMAP) project. | RCMAP quantifies the percent cover of ten rangeland components (annual herbaceous, bare ground, herbaceous, litter, non-sagebrush shrub, perennial herbaceous, sagebrush, shrub, and tree cover and shrub height) at yearly time-steps across the western U.S. | RCMAP provides maps of vegetation cover at yearly time-steps, a critical reference for advancing science in the BLM and assessing Landscape Health standards. | Developed in-house | FALSE | RCMAP provides maps of vegetation cover at yearly time-steps, a critical reference for advancing science in the BLM and assessing Landscape Health standards. | field training data, Landsat imagery | FALSE | None of the Above | FALSE | Improved disaster planning process, improved infrastructure planning and development. | In-progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | In-progress | ||||||||
| Department Of The Interior | OCIO | DOI-0170 | Everlaw AI Assistant for Responsive Review | Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Everlaw AI Assistant's Coding Suggestions lets OTAR and DO set natural-language criteria (case, category, and code descriptions) to assess document responsiveness. By tagging documents using keywords, account numbers, and contextual clues, it reduces manual review and speeds document triage. | Reduced manual labor cost and cycle time for responsive review; more consistent and rapid identification of responsive documents; meet discovery deadlines; large scale document search; and improved classification for historical accounting. | The AI assistant provides one of four coding suggestions: Yes (direct match), Soft Yes (plausible relevance), Soft No (weak relevance), or No (not relevant). DOI reviewers can filter by category, validate samples, and refine code descriptions. Output is advisory; reviewers decide whether to apply the Responsive code. | c) Developed with both contracting and in-house resources | Everlaw, Inc. | TRUE | The AI assistant provides one of four coding suggestions: Yes (direct match), Soft Yes (plausible relevance), Soft No (weak relevance), or No (not relevant). DOI reviewers can filter by category, validate samples, and refine code descriptions. Output is advisory; reviewers decide whether to apply the Responsive code. | digitized documents such as contracts, account records, correspondence, and financial records | TRUE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0169 | CHS Q Business AI Assistant (theKraken) | Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI Assistant is designed to address the challenge of quickly finding and using information spread across multiple platforms like GitLab and Confluence. Instead of employees spending valuable time searching through scattered documentation, theKraken provides a centralized tool to query, summarize, and generate content directly from CHS resources. This strengthens the agency's mission by enabling staff to focus on high-value work rather than time-consuming administrative tasks. | The AI Assistant has significantly enhanced the new employee onboarding experience by acting as a personalized guide to our data. It provides a centralized place to ask questions and quickly access information. | Generated natural language responses to user queries. | 04/01/2025 | Developed in-house | TRUE | Generated natural language responses to user queries. | Internal data from GitLab and Confluence | FALSE | None of the Above | FALSE | Potential to predict locations of critical minerals on the seafloor | In-progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | In-progress | |||||||
| Department Of The Interior | USGS | DOI-0168 | Machine Learning for Bat Acoustics | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Other | Improving the accuracy and timeliness of species status assessments and science to support deregulation efforts. | This effort will result in significant cost savings related to environmental review for permitting. | This model will be refined and used for prediction and decisions. | Developed in-house | FALSE | This model will be refined and used for prediction and decisions. | This dataset contains audio files of bat echolocation calls used to develop NABat ML algorithm V1.0, excluding the test (holdout) set. Recordings were collected by monitoring partners across North America using ultrasonic recorders for stationary and mobile surveys, then post-processed to remove noise and assign species labels. Labeling typically involves automated classification followed by manual review (see NABat guides). Files were submitted in WAV format and include 35 classes (34 species + noise), with 4 species excluded for low sample size. From this pool, recordings were randomly selected and split into training and validation sets; the test set is not included. Audio files are grouped by four-letter species codes, with a reference dataset providing Family, Genus, Species, and Common name definitions. | https://www.sciencebase.gov/catalog/item/627ed4b2d34e3bef0c9a2f30 | FALSE | None of the Above | FALSE | https://code.usgs.gov/fort/nabat/nabat-ml/-/tree/v2.0.0 | |||||||||||||
| Department Of The Interior | USGS | DOI-0164 | Mapping ecohydrological headwater refugia | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This application of machine learning is being used to aggregate complex spatial data to develop a statistical model used to create detailed maps of headwater stream resources and habitat. | cut costs by producing more accurate results when mapping headwater stream resources and habitat that can be used to help ensure water security for the nation and healthy aquatic habitat necessary to maintain safe water resources | The supervised machine learning output will include a statistical model that produces raster files of spatial data with habitability scores for each pixel and used to produce high resolution maps of the output. | FALSE | The supervised machine learning output will include a statistical model that produces raster files of spatial data with habitability scores for each pixel and used to produce high resolution maps of the output. | FALSE | None of the Above | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0166 | Machine Learning based shoreline detection and sea ice dynamics using coastal cameras [2024 INV#WO0000000108008] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Determine shoreline location and change as well as sea ice dynamics from coastal cameras in remote communities | Output will be incorporated into Total Water Level and Coastal Change numerical models. | shoreline and sea ice locations | Developed in-house | FALSE | shoreline and sea ice locations | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | In-progress | ||||||||||
| Department Of The Interior | USGS | DOI-0165 | Tsunami Hazard Analysis [2024 INV#WO0000000108470] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improve onshore probabilistic inundation forecasts | Better/more efficient emergency management and infrastructure planning processes | Inundation maps of coastal areas | Developed in-house | FALSE | Inundation maps of coastal areas | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | In-progress | ||||||||||
| Department Of The Interior | USGS | DOI-0167 | Machine learning for streamflow forecasting [2024 INV#WO0000000109317] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improve streamflow forecasting predictions in the Willamette River basin. | Improved understanding of the system and predictions. | Forecasted streamflow. | FALSE | Forecasted streamflow. | FALSE | None of the Above | FALSE | Not published yet. | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0163 | Determining the resource potential of critical minerals in seafloor massive sulfide deposits [2024 INV#WO0000000109311] | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The system predicts the location of seafloor massive sulfide deposits for use as critical mineral indicators. Produces associated mapping products for public use. | More efficient searching for seafloor critical minerals - predictions can be used to inform resource expeditions rather than using traditional guesswork planning. | Maps of various mineral locations and compositions on the seafloor. | Developed in-house | FALSE | Maps of various mineral locations and compositions on the seafloor. | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | In-progress | ||||||||||
| Department Of The Interior | USGS | DOI-0161 | use of random forest for species distribution modeling | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We use random forest models in R as part of an ensemble species distribution modeling workflow. Random forests have been applied to model the distribution of Joshua trees, as well as other species in the Mojave Desert in support of the BLM's Mojave Desert Native Plant Program (interagency partnership). Traditional machine learning methods, including random forests and maxent, will continue to feature in our development of species distribution models. | The use of AI in this application improves species distribution models, reduces error, and makes models more precise, which provides more accurate and useful information to decision-makers at Interior agencies. | The machine learning output is predictive and places bounds on species distributions that are then used by regulatory agencies to provide guidance. | Developed in-house | FALSE | The machine learning output is predictive and places bounds on species distributions that are then used by regulatory agencies to provide guidance. | We used research data sets of Joshua trees that were collected in house and owned by USGS (public). The data have been published through proper procedures | FALSE | https://doi.org/10.5066/P9NZMDLL | None of the Above | FALSE | https://doi.org/10.5066/P9NZMDLL | https://doi.org/10.5066/P9NZMDLL | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||
| Department Of The Interior | USGS | DOI-0157 | Remote sensing of particulate and filter passing mercury species: models and proxies | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI provides the framework for understanding the relationship between optical water quality parameters and non-optical contaminants to develop highly accurate remote sensing models for mercury from satellite images and field measurements | Synoptic, spatially cohesive maps of mercury distribution in surface water | A model that is applied to remote sensing imagery to provide a measurement of mercury and methylmercury in water bodies | FALSE | A model that is applied to remote sensing imagery to provide a measurement of mercury and methylmercury in water bodies | FALSE | None of the Above | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0159 | Tephra classification with machine learning [2024 INV#WO0000000109095] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Layers of volcanic ash can be classified using their geochemical components to link the ash to the volcano it erupted from. | This model helps probabilistically identify distal tephra layers from throughout the region, helping us better understand its long-term record of explosive volcanism. | Identification of the source volcano of tephra layers in the form of prediction sets and probabilities, e.g., {Katmai: 0.82, Augustine: 0.15, Redoubt: 0.03} | 01/01/2023 | Developed in-house | TRUE | Identification of the source volcano of tephra layers in the form of prediction sets and probabilities, e.g., {Katmai: 0.82, Augustine: 0.15, Redoubt: 0.03} | Data associated with this data release: https://dggs.alaska.gov/pubs/id/31091 | https://dggs.alaska.gov/pubs/id/31091 | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0158 | HOTLink: identifying elevated thermal anomalies at volcanoes [2024 INV#WO0000000109217] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Moves beyond threshold-based hotspot detection algorithms using computer vision and CNN for improved detection of weak thermal signals that may be the first indication of volcanic unrest | Detects 22% more hotspots at volcanoes and produces 12% fewer false positives across multiple satellite sensor platforms to accurately characterize background and eruptive periods, as well as providing probabilistic measures of detection confidence. | Hotspot detections with probabilities, quantitative time series of volcanic radiative power | 06/09/2025 | Developed in-house | TRUE | Hotspot detections with probabilities, quantitative time series of volcanic radiative power | Satellite thermal images | FALSE | None of the Above | FALSE | https://github.com/csaundersshultz/HotLINK | |||||||||||||
| Department Of The Interior | USGS | DOI-0154 | An integrated sensor network and data driven approach to satellite remote sensing of Dissolved Organic Matter | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI leverages the very large water quality dataset from extensive existing in situ data at continuous monitoring stations to develop a remote sensing model with improved accuracy for DOM estimates from satellite images | Improved monitoring of water quality | A model that is applied to remote sensing imagery to provide a measurement of DOM in water bodies | FALSE | A model that is applied to remote sensing imagery to provide a measurement of DOM in water bodies | FALSE | None of the Above | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0156 | PRObability of Streamflow PERmanence (PROSPER models) [2024 INV#WO0000000109074] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predictions of reliable surface flow in streams at regional scales to inform land and water resource decisions related to water availability. | more accurate estimates of water availability for more efficient use of field verification efforts, expected increased success of restoration or species conservation projects. | prediction of the annual probability of a stream reach having year-round surface flow. | 12/01/2019 | Developed in-house | FALSE | prediction of the annual probability of a stream reach having year-round surface flow. | flow/no flow field observations and publicly available gridded spatial datasets to describe climate and physiography | https://www.sciencebase.gov/catalog/item/5c12a499e4b034bf6a85eabd | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0155 | Machine Learning for automatic fracture mapping and rock identification [2024 INV#WO0000000109499] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine learning algorithms are being used to improve detection and characterization of fault surface geometries using the spatial patterns of earthquake locations. We have improved our ability to generate long, continuous, fault surface representations and have implemented non-planar machine learning based fitting approaches. | Improved fault geometries that advance our ability to accurately model earthquake hazards. | The outputs are 3D fault models that are meshed at the user's specified resolution, fault model quality metrics, and a fully 3D render of the fault surfaces. | 03/01/2025 | Developed in-house | FALSE | The outputs are 3D fault models that are meshed at the user's specified resolution, fault model quality metrics, and a fully 3D render of the fault surfaces. | We used publicly available earthquake catalogs from northern California, specialized high-resolution aftershock catalogs, and existing fault models to evaluate model performance. | FALSE | None of the Above | FALSE | https://code.usgs.gov/esc/surf/ | |||||||||||||
| Department Of The Interior | USGS | DOI-0153 | Machine Learning for Rapid Earthquake Magnitude Estimation | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A machine learning algorithm that utilizes statistics of earthquake waveforms to determine whether an earthquake is large enough to warrant an earthquake early warning alert, with applications to the earthquake early warning system. | Improve accuracy and speed of earthquake magnitude determination for earthquake early warning. Public safety benefits from a faster, more accurate warning system. | Rapid determination of whether an earthquake is large enough to warrant an alert or not. | FALSE | Rapid determination of whether an earthquake is large enough to warrant an alert or not. | FALSE | None of the Above | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0152 | Lead attribution model | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Classification model identifying soils potentially contaminated by lead from battery recycling | Model is intended to be used by a state agency in California to identify soils potentially affected by battery recycling | Classification of a soil as either likely affected by battery recycling or not | FALSE | Classification of a soil as either likely affected by battery recycling or not | FALSE | None of the Above | FALSE | The code will be publicly available on ScienceBase as model archive after development | In-progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | In-progress | ||||||||||
| Department Of The Interior | USGS | DOI-0149 | discrimination among biological radar target types detected by NEXRAD | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | USGS | DOI-0151 | target discrimination on portable radar | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This application of machine learning is intended as a pilot effort to discriminate among radar target types, specifically between flying animals and precipitation. | quantify moth movement into the Yellowstone ecosystem, or rather, the calories they represent as a food source. Army cutworm moths are a critical but poorly understood food for grizzly bears | The algorithm outputs a probability of whether a radar target is biological or precipitation. | Developed in-house | FALSE | The algorithm outputs a probability of whether a radar target is biological or precipitation. | Training data were derived from USGS-operated portable radars and scored by USGS personnel. No data was provided to the agency. | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0150 | AI to survey boat traffic | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are using AI to collect boat traffic times when traveling in a specific area by scanning video. Typically, this type of data would be collected by manually watching video and recording boat entrance and exit times. Using AI, we can dramatically lower the time a person spends monitoring videos, making more efficient use of the employee's time at work. | cost savings | Boat entrance and exit times from a specific area. | Purchased from a vendor | Ultralytics | TRUE | Boat entrance and exit times from a specific area. | Vendor provided training data and evaluated performance; we used the default model and did not use additional training data. | FALSE | None of the Above | FALSE | www.ultralytics.com/yolo | |||||||||||||
| Department Of The Interior | USGS | DOI-0146 | Machine learning for stream velocity prediction | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To predict stream velocity from streamflow and geographic attributes. | USGS possesses unusually deep knowledge of biological radar targets, which can readily be confused with drones, since both fall into the category of "low and slow" flying targets. | Prediction of stream velocity at a location. | FALSE | Prediction of stream velocity at a location. | FALSE | None of the Above | FALSE | Not yet published. | ||||||||||||||
| Department Of The Interior | USGS | DOI-0148 | Machine Learning for Avalanche Frequency Modeling | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The machine learning (Random Forest) was used to identify vegetation characteristics in avalanche paths. This helps determine avalanche return periods in specific avalanche paths. | Benefits include results that inform avalanche forecasters, transportation departments, and infrastructure planners on estimating spatial extents of avalanche return periods. | The Random Forest model outputs vegetation classification that is necessary for identifying avalanche return periods. | Developed in-house | FALSE | The Random Forest model outputs vegetation classification that is necessary for identifying avalanche return periods. | We used high point density lidar data. Data collection and processing adhered to a maximum nominal post spacing of 0.35 m, with a mean point density of 16 points/m2. We also used four-band (red, green, blue, and near-infrared) NAIP imagery at 0.6-m horizontal spatial resolution. | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0147 | Improved earthquake detection for research studies [2024 INV#WO0000000108499] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Deep learning methods are being used to improve detection of earthquakes to provide more complete, high-resolution catalogs that are used in research to better understand earthquake occurrence, rupture processes and seismic hazard. | Better understand earthquake occurrence, rupture processes and seismic hazard. | More complete, high-resolution catalogs | Developed in-house | FALSE | More complete, high-resolution catalogs | FALSE | None of the Above | FALSE | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0143 | Predictive AI applications for estimating water quality constituents as causal factors of harmful algal blooms. | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Ensemble regressions to predict suspended sediment, total nitrogen, total phosphorus, algal pigments, and algal cell abundances, along with image-based estimation of suspended sediment concentration via machine learning, are used in combination with custom-developed Python code to build on our correlative understanding of what drives harmful algal blooms and to introduce a technique to better understand and attribute causality. | Better understanding of the causes of harmful algal blooms so that localities and state agencies can take proactive action to prevent harmful algal blooms that potentially impact life and property. | Estimates of water quality constituents: suspended sediment, total nitrogen, total phosphorus, algal pigments, and algal cell abundances. | FALSE | Estimates of water quality constituents: suspended sediment, total nitrogen, total phosphorus, algal pigments, and algal cell abundances. | FALSE | No privacy use case | None of the Above | FALSE | https://www.python.org/ | No privacy use case | ||||||||||||||
| Department Of The Interior | USGS | DOI-0142 | Biotic and abiotic drivers of the prevalence of a tick and associated vector-borne disease | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Ticks are one of the most important vectors of disease in North America; however, their presence in desert ecosystems is often underestimated. The Gulf Coast tick (Amblyomma maculatum) has been recently discovered in Arizona and has the potential to transmit a bacterial spotted fever (Rickettsia parkeri) to humans and other animals. We do not yet understand why this tick has recently appeared in Arizona. | Data will be used to inform management and conservation activities in the region and better understand the risk of emerging disease to Arizona wildlife and people. | Identify the relative importance of biotic and abiotic factors driving tick emergence. | FALSE | Identify the relative importance of biotic and abiotic factors driving tick emergence. | FALSE | None of the Above | FALSE | Not yet available | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0141 | Enhancing Community and Wildlife Resilience to Sea-Level Rise and Infrastructure Development in the San Pablo Baylands | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Considerable public dollars will be invested in both tidal marsh restoration and transportation upgrades in the Baylands; yet the combined and interactive effects of sea-level rise (SLR) and infrastructure changes on wildlife habitat, local communities, and public access are largely unknown. | The goal of our project is to understand the potential impacts of SLR and transportation redesigns on the Baylands and identify management actions that could be taken to achieve desired future habitat and public access targets. | Wildlife-habitat relationships will be modeled using boosted regression trees or a similar machine learning method. | FALSE | Wildlife-habitat relationships will be modeled using boosted regression trees or a similar machine learning method. | FALSE | None of the Above | FALSE | Not yet available | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0139 | Predicting from the past - identifying characteristics of invasion-resistant and invasion-prone waterbodies to aid horizon scanning | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine learning and statistical modeling will be used to leverage region-wide waterbody invasion histories and datasets on the physical, biological, chemical, anthropogenic, and geographic characteristics of these waterbodies to: a) identify those variables that increase or decrease invasion risk, b) categorize all waterbodies in the region based on their invasion risk, and c) provide decision support tools for managers and policy makers to identify at-risk sites. | AI/ML will provide a decision support tool for managers and policy makers to identify waterbodies vulnerable to invasion by non-native fishes and potential actions that could be implemented to reduce risk of invasion. | Waterbodies vulnerable to invasion | FALSE | Waterbodies vulnerable to invasion | FALSE | None of the Above | FALSE | Not yet available | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0144 | Adaptive Management with AMMonitor | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automated species identification from remotely sensed data (images and audio files) | Using AI to accurately identify species from remotely sensed data will save thousands of hours of personnel time previously required and would allow researchers to inexpensively capture extensive data sets on species abundances and distributions. | Species identification from remotely sensed data | 01/01/2022 | Developed in-house | FALSE | Species identification from remotely sensed data | Remotely sensed imagery and audio data | FALSE | None of the Above | FALSE | https://www.usgs.gov/software/ammonitor | |||||||||||||
| Department Of The Interior | USGS | DOI-0138 | Using High-Resolution Imagery and Artificial Intelligence to Support Climate Change Resilience in Agroforestry Across the Pacific | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | On remote Pacific islands and outer atolls, agroforestry (i.e., the cultivation and conservation of trees for agriculture) provides food security and income to local communities. Growing instability from climate change and invasive species like the coconut rhinoceros beetle threaten these resources. Actively managing and sustaining agroforestry resources requires detailed and up-to-date knowledge of forest inventories and conditions. | The results from this work can be used by smallholder coconut farmers and processors and local and national government agencies to better manage agroforestry resources for coconut, pandanus, and other species of interest across the Pacific Island region. | Project researchers will build capacity for conducting detailed agroforestry assessment and monitoring in Pacific Island nations, by using imagery collected from small unmanned aerial systems (sUAS or drones) and custom computer algorithms to automatically detect and monitor the health of coconut trees and other species of importance. | FALSE | Project researchers will build capacity for conducting detailed agroforestry assessment and monitoring in Pacific Island nations, by using imagery collected from small unmanned aerial systems (sUAS or drones) and custom computer algorithms to automatically detect and monitor the health of coconut trees and other species of importance. | FALSE | None of the Above | FALSE | Not yet available | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0140 | Forecasting Earthquake Ground Motion Time Series [2024 INV#WO0000000109733] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Development of deep learning models to generate earthquake ground motion time series for potential application to Earthquake Early Warning, Operational Aftershock Forecasting, and the National Seismic Hazard Model. | Improved Earthquake Early Warning, Operational Aftershock Forecasting, and the National Seismic Hazard Model | Generate earthquake ground motion time series | Developed in-house | FALSE | Generate earthquake ground motion time series | FALSE | None of the Above | FALSE | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0136 | Refining Flood Risk Predictions in Hawaiʻi with Generative Machine Learning | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | While global climate models (GCMs) offer climate projections, their coarse spatial resolution does not allow regional characteristics to be captured accurately, details that are crucial for managing risks related to climate hazards like floods, droughts, and heatwaves. | Recent breakthroughs in generative artificial intelligence (AI) with neural networks offer new possibilities for statistical downscaling. | This project will use these generative AI advancements to create high-resolution (1km) daily and sub-daily precipitation maps for the Hawaiian Islands and use these maps to quantify the risk of flooding events. | FALSE | This project will use these generative AI advancements to create high-resolution (1km) daily and sub-daily precipitation maps for the Hawaiian Islands and use these maps to quantify the risk of flooding events. | FALSE | None of the Above | FALSE | Not yet available | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0134 | Machine Learning for High-Resolution Downscaling in the Hawaiian Islands | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Currently, models of global climate change lack the resolution needed to model the processes that create most of Hawaiʻi's rainfall. | Predict precipitation at locations where no measurement data is available, using rainfall measurements from nearby locations | Using these improved spatial interpolation models, this project will create high-resolution, accurate historical rainfall maps. The project will also test the method for projecting future rainfall and compare predictions to existing statistical downscaling models. | FALSE | Using these improved spatial interpolation models, this project will create high-resolution, accurate historical rainfall maps. The project will also test the method for projecting future rainfall and compare predictions to existing statistical downscaling models. | FALSE | None of the Above | FALSE | Not yet available | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0137 | Random forest models for predicting water quality of inland waters from remotely sensed imagery | Pilot The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Inland waterbodies (i.e., rivers, lakes, reservoirs, ponds, etc.) can face poor water quality, which may pose risks to water users, water infrastructure, and ecosystems. While it is important to monitor the water quality of these waterbodies, it can be costly and labor-intensive to continuously monitor sites across the US. This project aims to improve the efficiency of monitoring by using Machine Learning (Random Forest) models to predict two water quality parameters of waterbodies seen in remotely sensed images. These remotely sensed images span the entire Conterminous US (CONUS) and are collected approximately every 5 days, allowing for water quality monitoring at high temporal and spatial resolutions with relatively low effort and cost. | Cost savings by estimating water quality parameters across the entire country on a frequent (5-day) schedule with minimal labor. By making these modeled water quality products publicly available, it may aid in managing water resources | Prediction of chlorophyll concentrations and turbidity values in remotely sensed images of water bodies. | 05/01/2025 | Developed in-house | FALSE | Prediction of chlorophyll concentrations and turbidity values in remotely sensed images of water bodies. | Data used to train the models are remotely sensed images from the European Space Agency's Sentinel-2 satellite program and in-situ USGS water quality data | https://www.sciencebase.gov/catalog/item/640f612dd34e254fd352e1ed | FALSE | None of the Above | FALSE | https://www.sciencebase.gov/catalog/item/68a8c2a0d4be026335794295 | ||||||||||||
| Department Of The Interior | USGS | DOI-0133 | Global Detection of Historical Harmful Algal Blooms via combined Satellite Data and Deep Learning Methods | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Create a global-scale geospatial database of HABs occurrences over the past 40 years and compile in-situ measured HAB events and related parameters, remotely sensed data, and machine learning methods. | Provide resource managers, researchers, and the public with a novel ability to track and compare HABs occurrences across time and space, which could identify systematic trends and patterns that affect multiple inland water bodies. | Create a global-scale geospatial database of HABs occurrences over the past 40 years and compile in-situ measured HAB events and related parameters, remotely sensed data, and machine learning methods. | FALSE | Create a global-scale geospatial database of HABs occurrences over the past 40 years and compile in-situ measured HAB events and related parameters, remotely sensed data, and machine learning methods. | FALSE | None of the Above | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0135 | Classifying CWD-Infected Elk Using Recurrent Neural Networks on GPS Movement Data | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Chronic Wasting Disease (CWD) is difficult and costly to diagnose using traditional biological testing. However, the disease affects an elk's behavior and movement over time. By analyzing GPS tracking data with AI, specifically Recurrent Neural Networks (RNNs), the model aims to: automatically identify movement patterns indicative of CWD infection, enable earlier or non-invasive detection of disease, and support wildlife disease surveillance and management decisions | Helps wildlife managers detect CWD in elk early using GPS movement data, reducing the need for costly and invasive testing. It supports faster, data-driven decisions for disease control and wildlife management, protecting both animal health and resources | The AI system outputs a classification label for each elk (CWD-infected or not infected) based on patterns in their GPS movement data. | 01/01/2025 | Developed in-house | FALSE | The AI system outputs a classification label for each elk (CWD-infected or not infected) based on patterns in their GPS movement data. | The model is trained and evaluated using GPS movement data from collared elk, labeled with confirmed CWD infection status through biological testing. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0131 | Feature extraction and characterization for update of The National Map | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Reduce time and level of effort for update of The National Map datasets which include the 3D Elevation Program, 3D Hydrography Program, geographic names, and topographic mapping. | Increased efficiency, reduced costs, error reduction, and more timely updates to national geospatial products including maps, GIS data, and web services. | Recommendations for operator review, consideration, and further evaluation. | FALSE | Recommendations for operator review, consideration, and further evaluation. | FALSE | None of the Above | FALSE | Yes, by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||||
| Department Of The Interior | USGS | DOI-0130 | Supervised learning for phycocyanin estimation from Sentinel-2 MSI in inland waters with independent validation | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Need to accurately predict specific pigment concentrations directly from satellite data | Cost savings related to prediction of specific pigment concentrations directly from satellite data | Create framework for model development and demonstration of model ability to identify various pigments directly from satellite data | FALSE | Create framework for model development and demonstration of model ability to identify various pigments directly from satellite data | FALSE | None of the Above | FALSE | Available in-house; not publicly released yet | Yes, by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||||
| Department Of The Interior | USGS | DOI-0132 | Four AI systems using different strategies to identify, classify and locate both volcano-tectonic and non-traditional volcanic earthquakes | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying and classifying volcanic earthquakes, which can be very different from traditional tectonic earthquakes. These are addressed with CNNs, hidden Markov models, pre-trained neural networks for picking P- and S-waves in seismograms, hierarchical clustering, identification of non-traditional volcanic earthquakes (e.g., long-period, degassing events, bubble collapse), and assignment of magnitudes. | These methods will automatically detect and classify earthquakes in large data sets without human review. In the case of identifying small earthquakes, the method has detected 4.5x more earthquakes than human review. | Catalogs of earthquakes | 01/01/2011 | Developed in-house | TRUE | Catalogs of earthquakes | Seismic waveforms and spectrograms | FALSE | None of the Above | FALSE | https://doi.org/10.1785/0120240240 | |||||||||||||
| Department Of The Interior | USGS | DOI-0125 | MAGMA-net: Multimodal analysis for geophysical monitoring of activity | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying volcanic deformation in satellite imagery combined with ground-based GNSS | Automated detection of volcanic deformation in vast amounts of satellite and ground-based deformation data. | Decision on deformation state of a volcanic system | FALSE | Decision on deformation state of a volcanic system | FALSE | None of the Above | FALSE | https://code.usgs.gov/vsc/geodesy/projects/magma-net | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0123 | Wildfire Risk Assessment & Predictive Modeling | Pilot The use case has been deployed in a limited test or pilot capacity. | Emergency Management | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Assess fire risk potential using historical data, fuel conditions, weather and other conditions. | Minimize loss of life and property | Prediction | 10/01/2023 | Developed with both contracting and in-house resources | Open Geospatial Consortium | FALSE | Prediction | Historical Fire Data, Fuel, weather data from federal, state, local, academia, commercial and other data sources. | FALSE | Other | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0129 | Mapping vegetation classes to understand wildfire fuel conditions | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Mapping vegetation classes that can be used to understand wildfire fuel condition across space and through time. | The machine learning mapping process we have developed can be used to more efficiently and effectively manage fuels for wildland fire | Predicted ecological states for the upper Colorado River Basin | 07/01/2025 | Developed in-house | FALSE | Predicted ecological states for the upper Colorado River Basin | Machine learning model was trained and tested using field data collected by BLM, NPS, and USGS, and predicted out using remote sensing data (Landsat) | https://doi.org/10.5066/P14WKTNS | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0127 | AI/ML For Hazard Detection in Austere Environments with Automated or Remotely-Operated Rovers [2024 INV#WO0000000107424] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify unique or hazardous ground cover in the field of view of an autonomous robot's camera system | Improving scientific analysis, and if deployed operationally, the preservation of investment through hazard avoidance of expensive remote or autonomous equipment | Prediction of ground cover type | Developed in-house | FALSE | Prediction of ground cover type | Image and telemetry data from NASA PDS | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0126 | Species Distribution Models for Pectis imberbis, a Rare Plant Species in Southeastern Arizona | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | USGS | DOI-0124 | Animas River Metals Surrogate | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Estimating metal concentrations in the Animas River near Cedar Hill, NM. | Increase metal concentration temporal resolution by reducing the need to physically sample surface waters within the river system. | Predicted metal concentrations in the Animas River | FALSE | Predicted metal concentrations in the Animas River | FALSE | None of the Above | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0121 | [Un]supervised clustering of [non-]earthquake signals commonly recorded on regional seismic networks | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Surficial mass movements (SMMs), such as landslides and rockfalls, have seismic signatures distinct from other routinely-recorded seismic sources like earthquakes and explosions. This project aims to develop a classification scheme to differentiate between seismic signals generated by different sources, especially for those generated by vertical processes ("falls") versus those generated by horizontal processes ("slides"). | By automatically discriminating the different sources that generate observed seismic signals, we can more accurately catalog the events and respond to them in more timely, appropriate ways. | Events will be automatically classified as seismogenic (e.g., earthquakes or explosions) or surficial mass movements (e.g., falls or slides) using statistical metrics extracted from real-time seismic waveform data. | FALSE | Events will be automatically classified as seismogenic (e.g., earthquakes or explosions) or surficial mass movements (e.g., falls or slides) using statistical metrics extracted from real-time seismic waveform data. | FALSE | None of the Above | FALSE | Yes, by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | In-progress | |||||||||||
| Department Of The Interior | USGS | DOI-0110 | Inference of PFAS precursor compositions | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This approach will allow for better identification of PFAS source zones and materials, and will enable better estimation of transport processes | Improved understanding of PFAS compositions, sources, and transport mechanisms will improve mitigation strategies. These improvements have the potential to reduce mitigation costs while improving overall outcomes. | Tabular data of predicted PFAS precursor compositions. | FALSE | Tabular data of predicted PFAS precursor compositions. | FALSE | Not applicable. | None of the Above | FALSE | https://pubs.acs.org/doi/10.1021/acs.estlett.0c00798 | Not applicable. | Yes, by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | In-progress | ||||||||
| Department Of The Interior | USGS | DOI-0118 | Using Graph Neural Networks for development of nonergodic earthquake ground-motion models | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | May ultimately enable better hazard assessment, reduce potential disaster costs, and improve the accuracy of public hazard maps and building guidelines. | This work may help improve seismic ground motion predictions, guiding safer construction and disaster planning while reducing long-term costs. | Generates predictions of ground motion that account for local site and path effects, improving accuracy of seismic hazard models. | FALSE | Generates predictions of ground motion that account for local site and path effects, improving accuracy of seismic hazard models. | FALSE | None of the Above | FALSE | https://code.usgs.gov/ghsc/users/kwithers/nshmp-lib-1 | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0120 | Modeling Rupture Directivity Effects on Ground Motion Using Neural Networks | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We use Artificial Neural Networks (ANNs) to more accurately model ground motion amplification effects caused by rupture directivity during earthquakes. These effects are complex and challenging to capture using traditional methods, which can impact the accuracy of seismic hazard assessments. | Computationally efficient method to improve seismic hazard modeling by incorporating rupture directivity effects | The outputs here are adjustments to the median and standard deviation of ground-motion model predictions. The AI system predicts the amplified ground motion effects due to rupture directivity, which are used to refine seismic hazard assessments in a computationally lightweight manner. | 06/01/2024 | Developed in-house | TRUE | The outputs here are adjustments to the median and standard deviation of ground-motion model predictions. The AI system predicts the amplified ground motion effects due to rupture directivity, which are used to refine seismic hazard assessments in a computationally lightweight manner. | The training data was derived from a publicly available rupture directivity model. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0119 | Deep-learning Integrations into NEIC Operations [2024 INV#WO0000000109496] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The AI model improves the rapid detection of earthquakes. | Reduces the time analysts need to work on poorly located or incorrect earthquake detections. | The algorithm predicts the type of seismic phases detected and the distance of the earthquake from the receiver, and improves the event detection time. | 06/02/2021 | Developed in-house | TRUE | The algorithm predicts the type of seismic phases detected and the distance of the earthquake from the receiver, and improves the event detection time. | Models were trained on public seismic data. Training data set was published and is available. | https://doi.org/10.1785/0220200178 | FALSE | None of the Above | FALSE | https://code.usgs.gov/ghsc/neic/aiml/neic-aidem | ||||||||||||
| Department Of The Interior | USGS | DOI-0109 | LLM-Assisted Volcanic Alert Monitoring | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The primary goal of this project is to automate the monitoring of volcanic alert levels from the public websites of our partner observatories. | Currently, this process can involve manual checks, which are time-consuming. This project aims to create a rapid, efficient, and resilient method for gathering this critical data, increasing the quality and value of our situational awareness | Data collection | Developed in-house | FALSE | Data collection | None needed, as a generic LLM is used. Data processed is public data. | FALSE | None of the Above | FALSE | Yes, by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||
| Department Of The Interior | USGS | DOI-0117 | USGS Flow Photo Explorer [2024 INV#WO0000000109196] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Flow Photo Explorer (FPE) is an integrated database, machine learning, and data visualization platform for monitoring streamflow and other hydrologic conditions using timelapse images. The goal of this project is to develop new approaches for collecting hydrologic data in streams, lakes, and other waterbodies, especially in places where traditional monitoring methods and technologies are not feasible or cost-prohibitive. FPE uses an artificial intelligence/machine learning (AI/ML) deep learning model to estimate relative streamflow using timelapse imagery. The model is trained using pairs of images for which a person (a.k.a. an annotator) has selected which of the two images in each pair appears to have more flow. From this, the model learns how to sort the images from lowest to highest apparent flow. The rankings of the sorted images then serve as indicators of the relative amount of streamflow. | cost savings, increased safety for employees and the public | Predicted relative flow hydrograph. | 10/01/2021 | Developed with both contracting and in-house resources | Walker Environmental Research LLC | FALSE | Predicted relative flow hydrograph. | The model is trained on human annotations of images captured by trail cameras. See the USGS Flow Photo Explorer to view training data and interface. Model performance is evaluated with timeseries streamflow data collected by collocated USGS streamgages. The streamflow data is made publicly available via the USGS National Water Information System (NWIS). | FALSE | None of the Above | FALSE | https://www.usgs.gov/software/streamflow-rank-estimation-sre-model | ||||||||||||
| Department Of The Interior | USGS | DOI-0116 | Synthesizing mapping and monitoring data to understand fluctuations in prairie dog colony size and densities in Theodore Roosevelt National Park [2024 INV#WO0000000109308] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Mapping prairie dog colonies in national parks is labor-intensive and costly. One of the aims of this project is to develop open-source methods for mapping prairie dog colonies using satellite data. | Cost and time savings | Prediction and map of the location of prairie dog colonies. | Developed in-house | FALSE | Prediction and map of the location of prairie dog colonies. | FALSE | None of the Above | FALSE | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0115 | Coastal Change Likelihood: Synthesizing change factors using supervised learning | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A supervised machine-learning framework (support vector machine algorithm) is used to predict future decadal-scale coastal change and its primary driver by combining over 20 published coastal geospatial datasets that describe the coastal landscape and the hazards that affect it. | The Coastal Change Likelihood project is a computer-aided synthesis of the factors that determine future coastal landscape change that can be used by decision makers to support adaptation, mitigation, and prioritization of coastal zone resources and infrastructure | The system produces maps of future coastal change, along with an indicator of the primary hazard(s) that produce the estimated change. | Purchased from a vendor | ESRI | TRUE | The system produces maps of future coastal change, along with an indicator of the primary hazard(s) that produce the estimated change. | NOAA's C-CAP and National Agriculture Imagery Program (NAIP) imagery. See publication for more information on performance evaluation: https://doi.org/10.2112/JCOASTRES-D-24-00072.1 | https://www.sciencebase.gov/catalog/item/61781c1bd34e4c6b7fe2a425 | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0114 | Improving Prediction Capabilities for Barrier Island Landscape Change | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This research uses several AI/ML tools to observe and analyze coastal landscape change at critical habitats along barrier islands. The work employs traditional ML methods (random forest model), and more cutting-edge methods, like foundation models (e.g. Segment Anything). AI tools that leverage human-in-the-loop methods (e.g. Doodler) enable efficient and reliable landcover map creation from aerial and satellite imagery and enhanced historical imagery that provides important long-term perspectives of landcover change. This work aids land management decision-making and risk assessments and provides data for model validation. | More accurate and longer-term information leads to more efficient and effective resource management, protecting property and national interests. | Data products include landcover maps and change maps; Publications describe model validation for coastal natural hazards and landscape evolution; Continued development of AI/ML methods and workflows to inform landscape change assessments. | 06/01/2023 | Developed with both contracting and in-house resources | Marda Science, LLC; ESRI; Python libraries (open-source) | FALSE | Data products include landcover maps and change maps; Publications describe model validation for coastal natural hazards and landscape evolution; Continued development of AI/ML methods and workflows to inform landscape change assessments. | Coastal aerial imagery and ground-reference data (example data release with attributed information is located at https://www.sciencebase.gov/catalog/item/67927873d34e88f5864c49b0) | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0113 | Lake-wide monitoring and assessment of Great Lakes fisheries with autonomous vehicles and image analysis | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Fisheries monitoring and assessment | Improved management of economically valuable predator stocks that underpin recreational, economic, and subsistence fisheries of the Great Lakes | Whole-lake and site level estimates of species abundance for the Great Lakes | 10/01/2020 | Developed in-house | FALSE | Whole-lake and site level estimates of species abundance for the Great Lakes | Underwater imagery annotated for locations of individual fish, geologic substrate, and invasive species | https://www.sciencebase.gov/catalog/item/67b8c515d34e1a2e835b7ffa | FALSE | None of the Above | FALSE | https://code.usgs.gov/great-lakes-science-center/computer-vision/fishscale | ||||||||||||
| Department Of The Interior | USGS | DOI-0112 | RSCC and TCA projects [2024 INV#WO0000000108017] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automated identification of coastal features in remote sensing data, reduced analyst time | Increased efficiency in identifying coastal features in remote sensing data compared to existing methods that are human analyst time intensive. Increased efficiency was tested by timing traditional methods versus AI/ML methods | Elevation and position of dune crest and toe, position of shoreline | Developed in-house | FALSE | Elevation and position of dune crest and toe, position of shoreline | Data used to train and evaluate the model came from various USGS data releases of dune morphology and shorelines. | https://marine.usgs.gov/coastalchangehazardsportal/ | FALSE | None of the Above | FALSE | https://marine.usgs.gov/coastalchangehazardsportal/ | |||||||||||||
| Department Of The Interior | USGS | DOI-0107 | Storm Induced Erosion Response Network | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Tool is used for separation (segmentation) of land and water in images. The resulting mask is used to calculate water levels. Tool will be used to compare to forecasted water levels and may be displayed on USGS webpages | Fast, efficient, and less expensive data processing than the previous method, which was labor-intensive | Eroded dune volume, beach volume, beach width, dune crest elevation | FALSE | Eroded dune volume, beach volume, beach width, dune crest elevation | FALSE | na | None of the Above | FALSE | code in development/software release planned | na | ||||||||||||||
| Department Of The Interior | USGS | DOI-0106 | Accelerating scientific discovery through AI-driven literature synthesis and meta-analysis using large language models [2024 INV#WO0000000201793] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | A team of USGS researchers is conducting a review of literature on drought and its effects on the western United States. Due to a large volume of literature, we are using AI to facilitate an efficient and reliable literature synthesis that can inform decision-making. These include ranking the literature most relevant to questions, identifying gaps in the literature, identifying data gaps, and coalescing currently known information. | These efforts will help expedite reviewing thousands of scientific studies that can support the project, but also support future efforts of compiling relevant information for NEPA planning or related tasks. | The outputs from AI will include identifications of study areas, relatedness to multiple topics (drought characteristics, hydrological and ecological processes, and hydrological and ecological responses). Using multiple LLMs and deep learning methods to evaluate the same context, we will have comparative information for model ensembles. | FALSE | The outputs from AI will include identifications of study areas, relatedness to multiple topics (drought characteristics, hydrological and ecological processes, and hydrological and ecological responses). Using multiple LLMs and deep learning methods to evaluate the same context, we will have comparative information for model ensembles. | FALSE | Other | FALSE | Code will be published at the end of the project and currently resides as private on USGS GitLab. | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0102 | Enhancing U.S. critical mineral supply chains through AI and remote sensing mapping of legacy mine sites and tailings | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Reinforcement Learning | There is growing interest in producing critical minerals and energy-related commodities within the United States to reduce dependence on foreign sources. However, many legacy mine sites remain poorly documented, with unknown locations, extents, and conditions, limiting opportunities to reassess the potential for extracting valuable materials from existing tailings and waste piles. | Develop a scalable, data-driven framework for identifying and prioritizing legacy mine sites with potential for critical mineral recovery; offer a cost-effective, repeatable method for improving national inventories of mine waste and tailings | A data set of mapped mines and/or mine tailings for two regions in different ecosystems. | FALSE | A data set of mapped mines and/or mine tailings for two regions in different ecosystems. | FALSE | None of the Above | FALSE | Depending on outcome of the project, the code will be published. | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0104 | Improving recreation opportunities and access to public lands through machine learning and transportation planning | Pre-deployment The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Increasing access to public lands for recreation starts with effective transportation planning. However, most states lack comprehensive transportation datasets, including traffic volume estimates on roads accessing public lands. | Temporal traffic estimates across all roads within Colorado will improve how FHWA allocates money and addresses needs for increasing access to public lands. These data will also support CDOT and other federal agencies in their transportation planning needs | Temporal traffic estimates (2013-2023 minimally) across all state and federal roads, including recreational roads. | FALSE | Temporal traffic estimates (2013-2023 minimally) across all state and federal roads, including recreational roads. | FALSE | None of the Above | FALSE | Code will be published at the end of the project and currently resides as private on USGS GitLab. | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0108 | Catalog of stock ponds using machine learning | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There are many undocumented stock ponds that retain water for use by farmers and ranchers throughout the landscape in the Dakotas. The stock pond AI model could identify unknown stock ponds to water managers for use in contaminant monitoring or water budget analysis and drought mitigation. | Identifying stock ponds in North Dakota using an automated ML model rather than manually doing so increases efficiency at the state level and could free up labor for other tasks. | The stock pond ML algorithm outputs predicted locations of stock ponds and dams in lidar images in jpg format and a csv file of locations given in latitude and longitude. | 04/01/2026 | Developed in-house | FALSE | The stock pond ML algorithm outputs predicted locations of stock ponds and dams in lidar images in jpg format and a csv file of locations given in latitude and longitude. | The model is trained on the publicly available lidar data disseminated by the North Dakota Department of Water Resources using the U-Net image segmentation architecture in PyTorch. The model is evaluated during training based on validation images through the use of RMSE. After the model development the output will be evaluated against known stock pond locations downloaded from the NDDWR and the U.S. Geological Survey National Hydrologic Data. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0105 | Experimental Forecast for River Chlorophyll | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Forecast total chlorophyll concentrations in streams across various locations, part of the Ecological Forecasting Initiative (EFI) which is a collaboration between USGS and this research organization. | Part of an experimental process to visualize stream forecasts for chlorophyll, improve model refinements, improve model accuracy, test understanding of drivers of harmful algal blooms. | Predictions - near-term forecasts (up to 30 days in advance) of total stream chlorophyll with separate graphs for model drivers and their trends/forecasts. | 02/12/2025 | Developed in-house | FALSE | Predictions - near-term forecasts (up to 30 days in advance) of total stream chlorophyll with separate graphs for model drivers and their trends/forecasts. | Streamgage observational data and NOAA Global Ensemble Forecast System (GEFS). all publicly available data | FALSE | None of the Above | FALSE | https://zenodo.org/records/7065579 | |||||||||||||
| Department Of The Interior | USGS | DOI-0101 | Multiple machine-learning estimation of groundwater levels and trends for the regional Mississippi River Valley alluvial aquifer | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improved water management at regional scales. | Prediction of groundwater levels at regional scales for better water management. | water level predictions | FALSE | water level predictions | FALSE | None of the Above | FALSE | https://code.usgs.gov/map/gw/levs/covMRVAgen1 | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0099 | Lake Champlain Cyanobacteria Bloom modeling | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Developing prediction for harmful algal bloom occurrence | Better understanding of the risks of harmful algal blooms at recreational areas | prediction | FALSE | prediction | FALSE | None of the Above | FALSE | https://www.r-project.org/ | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0098 | Forest metrics at Redwood National and State Parks | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are developing a convolutional neural network in TensorFlow to predict the presence of treefall gaps based on Sentinel-2 imagery. | This information will help active timber management and will reduce wildfire risk and increase forest resilience. | Maps of forest canopy gaps | FALSE | Maps of forest canopy gaps | FALSE | None of the Above | FALSE | The code is on USGS GitLab | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0100 | Predicting post-fire tree mortality | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We used random forests to select predictor variables for models of individual tree mortality following fire. | These science products strengthen wildland fire and timber management practices. | Predictive models of postfire tree survivorship. | 10/01/2021 | Developed in-house | FALSE | Predictive models of postfire tree survivorship. | A database of fire effects to describe the determinants of fire-induced tree mortality. | https://research.fs.usda.gov/firelab/products/dataandtools/fire-and-tree-mortality-database | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0096 | Fire Effects at Whiskeytown National Recreation Area and Lava Beds National Monument | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We used random forests to predict the mortality class (live/dead) of lidar-derived tree approximate objects and vegetation conditions. | These science products will strengthen wildland fire and timber management practices. | Maps of post-fire seed dispersal and tree survival following wildfire at Whiskeytown NRA, and of vegetation cover after wildfires at Lava Beds National Monument. | FALSE | Maps of post-fire seed dispersal and tree survival following wildfire at Whiskeytown NRA, and of vegetation cover after wildfires at Lava Beds National Monument. | FALSE | None of the Above | FALSE | The code is on USGS GitLab | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0095 | Global food-and-water security-support analysis data (GFSAD) project [2024 INV#WO0000000107073] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | 1. Landsat-derived rainfed and irrigated area-product of Conterminous United States (LRIP30) 2. Landsat-derived global cropland extent product @ 30 m (LGCEP30) 3. Landsat-derived global rainfed and irrigated area product @ 30 m (LGRIP30) | 1. assessing the Nation's irrigated and rainfed cropland areas 2. critical to assessing crop water use, crop water productivity, and crop water savings 3. providing information on global irrigated and rainfed cropland area maps and statistics | Producing irrigated and rainfed cropland maps and statistics of the United States of America (USA) and the world using Landsat and other similar satellites at 30 m or better spatial resolution. For example: https://www.usgs.gov/apps/croplands/app/map. Numerous datasets and scientific papers are published on this work. Please visit: www.usgs.gov/wgsc/gfsad30 | Developed in-house | FALSE | Producing irrigated and rainfed cropland maps and statistics of the United States of America (USA) and the world using Landsat and other similar satellites at 30 m or better spatial resolution. For example: https://www.usgs.gov/apps/croplands/app/map. Numerous datasets and scientific papers are published on this work. Please visit: www.usgs.gov/wgsc/gfsad30 | All data used in our AI models (ML/DL) are publicly available from sources such as the U.S. Geological Survey (USGS), NASA, and the European Space Agency (ESA). These data are from the Landsat and Sentinel satellites of USGS/NASA and ESA. We also use the USDA Cropland Data Layer (CDL) for training and testing models. All these data are publicly available online. | https://doi.org/10.3133/pp1868 | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0094 | Using deep learning to classify potential piping plover habitat along the Upper Missouri River | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | The U.S. Army Corps of Engineers is required to assess piping plover habitat on the Missouri River annually, per a Biological Opinion. We have been using classification tools to map these habitats for over a decade. However, these methods are outdated, and we will use deep learning tools to automate the mapping of these habitats. This will create efficiencies once we have a production-scale model. | reduce time and effort, shorter customer wait times, allows shifting efforts to value-added science | Prediction map of piping plover habitat | Developed in-house | FALSE | Prediction map of piping plover habitat | We are training a UNET 3+ model using 3-m resolution satellite data acquired from PlanetScope through an agreement with the US Army Corps of Engineers. We can also acquire this same data through an interdepartmental agreement between NASA and DOI. | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0091 | Diploid Detector/Triploid Tracker for Grass Carp | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | There are expected benefits from the ability to determine whether Grass Carp can reproduce (diploid, with two sets of chromosomes) and thus take over and damage an ecosystem, or whether the captured Grass Carp have three sets of chromosomes (triploid) and were hatchery produced and will not reproduce: (1) the fish can be eliminated immediately (2) law enforcement can address illegalities per state per the Lacey Act (3) the ecosystem will be maintained and protected from reproductive Grass Carp. | science for development, evaluation, and refinement of technologies for preventing the spread of Grass Carp to uninvaded waters, and for reducing populations where they are present or established. | The output is the prediction that an individual Grass Carp is triploid or diploid. The final product should be able to be used with additional fish species. | FALSE | The output is the prediction that an individual Grass Carp is triploid or diploid. The final product should be able to be used with additional fish species. | FALSE | na | None of the Above | FALSE | YOLO. https://www.opencv.ai/blog/yolo-unraveled-a-clear-guide | na | ||||||||||||||
| Department Of The Interior | USGS | DOI-0092 | Frog vocalization recognition from digital recordings | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automated audio recorders make it easy to gather large amounts of digital audio recordings where frogs may be vocalizing. These recordings are too numerous to make it cost effective to listen to all of them. AI helps us search the recordings for calls of specific species to determine if they were there without us listening to thousands of hours of recordings. | We are using this method to screen for Cuban treefrogs, which are an invasive species in the southern U.S. This will allow us to increase our early detection and rapid response to detections at sites that were not previously known to have the species. | AI outputs a list of vocalizations that it identifies as possibly the frog of interest. | 11/04/2024 | Developed in-house | FALSE | AI outputs a list of vocalizations that it identifies as possibly the frog of interest. | example recordings of frog vocalizations | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0090 | Patterns in the Landscape Analyses of Cause and Effect | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ML satellite image classification is being used to better map flooding and fire events/characteristics for more effective hazard management. | High quality mapping of disasters allows us to better understand the causes and effects. | Classified maps are predictions based on the characteristics of the input training information and the spatial data used to predict the presence of a feature | Developed in-house | FALSE | Classified maps are predictions based on the characteristics of the input training information and the spatial data used to predict the presence of a feature | We compare results with ground and remote sensing based reference data | https://www.usgs.gov/centers/western-geographic-science-center/science/patterns-landscape-analyses-cause-and-effect#data | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0088 | Reducing elevation error in coastal wetland digital elevation models | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The objective of this use case was to train/deploy a random forest regression model to reduce elevation error in a coastal wetland digital elevation model. | The model led to an enhanced digital elevation model for coastal wetland areas. | Prediction | 10/04/2022 | Developed in-house | FALSE | Prediction | In situ elevation data was collected and used for the training and testing data development. The predictor variables used in the model included elevation data and satellite imagery. | https://doi.org/10.5066/P9AL9DEM | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||
| Department Of The Interior | USGS | DOI-0087 | Developing land cover maps for barrier islands using satellite imagery | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The objective of this ML/AI use case was to map land cover on barrier islands using satellite imagery. | Prediction of land cover types based on signatures from training data (i.e., pixels with class labels). | Prediction of land cover types based on signatures from training data (i.e., pixels with class labels). | Developed in-house | FALSE | Prediction of land cover types based on signatures from training data (i.e., pixels with class labels). | Satellite imagery was used for the mapping process. Photointerpretation and existing land cover maps were used for the training and testing dataset development. | https://www.sciencebase.gov/catalog/item/5a32ebe1e4b08e6a89d886b4 | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0085 | Detecting and tracking reed canarygrass (Phalaris arundinacea) invasion in the Upper Mississippi River floodplain using remote sensing and artificial intelligence. | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We have a limited understanding of how the distribution of an invasive grass has changed over time. We seek to use satellite imagery to identify and track annual changes in the distribution of the invasive grass reed canarygrass. | This will lead to a better understanding of the dynamics (e.g., changes in inundation) that drive changes in the distribution of the invasive grass reed canarygrass to help achieve management objectives. | Predictions of reed canarygrass invasion across the Upper Mississippi River system at an annual or subannual time step. | FALSE | Predictions of reed canarygrass invasion across the Upper Mississippi River system at an annual or subannual time step. | FALSE | None of the Above | FALSE | We will publish the code in ScienceBase upon completion of the project. | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||||
| Department Of The Interior | USGS | DOI-0082 | Hydrologic predictions for the Upper Mississippi River System using a hybrid deep learning approach. | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Utilize historical datasets of air temperature, precipitation, discharge, and water surface elevation to train a deep learning model to predict discharge and water surface elevation across the Upper Mississippi River System. | This model will be useful for filling gaps in the hydrologic record and understanding how changes in temperature and precipitation influence hydrology across the system. | predictions of discharge and water surface elevation | FALSE | predictions of discharge and water surface elevation | FALSE | None of the Above | FALSE | Code will be published in ScienceBase upon the completion of the project. | ||||||||||||||||
| Department Of The Interior | USGS | DOI-0084 | Estimates of Habitat Suitability of Reed Canarygrass (Phalaris arundinacea) in Upper Mississippi River Floodplain Forest Understories | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A better understanding of where in the Upper Mississippi River floodplain an invasive grass may occur. | This will help managers and researchers prioritize areas for restoration and further research. | Probability of habitat suitability for an invasive wetland grass called reed canarygrass. | Developed in-house | FALSE | Probability of habitat suitability for an invasive wetland grass called reed canarygrass. | We used a variety of remotely sensed and derived datasets developed by the USGS Long-Term Resource Monitoring program at the Upper Midwest Environmental Sciences Center (landcover and inundation), and forest inventory data from the U.S. Army Corps of Engineers. | https://doi.org/10.5066/P9KBFBHW | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0083 | Integrate High-resolution Satellite Remote Sensing Data with Automated Machine Learning Techniques to Enhance Water Quality Assessment | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | understanding water quality in the Mississippi River using available data | improving understanding of water quality, prioritizing restoration of the Mississippi River | predictions | 09/02/2024 | Developed in-house | FALSE | predictions | public and private industry data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0061 | Invasive Carp Harvest Predictive Model | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | identify where invasive carp are congregating in large numbers in the Mississippi River for targeted harvest | target for commercial harvesters seeking invasive carp in the Mississippi River | predictions, maps | Developed in-house | FALSE | predictions, maps | multiple public datasets | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||
| Department Of The Interior | USGS | DOI-0081 | Submersed Aquatic Vegetation Vulnerability Evaluation Application (SAVVEA) | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Aid understanding of aquatic ecosystem constraints for vegetation growth | Define opportunities for aquatic ecosystem restoration that can cascade to improve water quality, hunting, fishing, bird watching, and recreational boating. | predictions | 04/10/2025 | Developed in-house | FALSE | predictions | publicly available information | https://doi.org/10.5066/P9QGD5NI | FALSE | None of the Above | FALSE | https://doi.org/10.5066/P9QGD5NI | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||
| Department Of The Interior | USGS | DOI-0080 | U.S. Wind Turbine Database | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Capture of geographic location of wind turbines from high resolution satellite imagery with object detection pipelines vs manual methods | Cost savings, efficiency in data capture | Global coordinates of turbine object entities | 06/01/2024 | Developed in-house | FALSE | Global coordinates of turbine object entities | High resolution satellite imagery chips from Maxar assets were used as training inputs | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0076 | PFAS model of soils in the northeast | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are predicting PFAS concentrations in soil across the northeast region | We intend to provide a map that will allow states to determine risk areas with high soil PFAS concentrations | Predictions of areas with elevated soil PFAS concentrations | FALSE | Predictions of areas with elevated soil PFAS concentrations | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||||
| Department Of The Interior | USGS | DOI-0078 | PFAS Groundwater Model | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are building a model (likely random forest or boosted regression tree) to predict PFAS concentrations in groundwater supplies in the US. | Predictions of PFAS in groundwater across the US. The intention is that the results can serve to inform states where sampling is needed and to reduce costs associated with sampling areas that aren't likely to have PFAS. | prediction | FALSE | prediction | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||||
| Department Of The Interior | USGS | DOI-0077 | ATLAS | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Literature review and data compilation is the most time-consuming phase of the mineral resource assessment workflow (https://www.usgs.gov/media/images/usgs-mineral-resource-assessment-workflow). Identifying and extracting datasets referenced in historical documents, journal articles and published databases is currently done using traditional methods of browsing the web, downloading manuscripts and datasets, and extracting relevant data. The AI system will automatically extract metadata about datasets from manuscripts and make it available to users in a catalog. By supporting user queries of the extracted metadata, the system will enable assessment scientists to identify data sources much more quickly. The tool is intended to support other AI tools down the line that facilitate data extraction and synthesis. The system will also be useful for tracking lineage and usage of published datasets, helping to assure data quality and assess impact. | The AI will accelerate the mineral resource assessment process and provide a model for other types of resource assessments. By improving efficiency, accuracy, and transparency of assessment workflows, AI tools can help USGS deliver results more rapidly | A catalog of metadata about published datasets that supports user queries through agentic AI. | FALSE | A catalog of metadata about published datasets that supports user queries through agentic AI. | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||||
| Department Of The Interior | USGS | DOI-0079 | Flor-AI: Developing a Remotely Sensed Image Classification Method for Inventory and Monitoring of Flora in Digital UAS Imagery | Pilot The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Computer Vision | This project supports the management of oak-pine barrens on the Necedah National Wildlife Refuge, Wisconsin. Necedah NWR staff conduct habitat management actions (prescribed burning, mowing, seeding, and herbicide application) to increase wild lupine (Lupinus perennis) abundance for the endangered Karner blue butterfly (Lycaeides melissa samuelis). This project will use uncrewed aerial systems (UAS) to collect imagery and apply artificial intelligence to efficiently process imagery and detect lupine. Model performance and the overall project workflow will inform future efforts and may be applied to additional vegetation monitoring programs. | Benefits of using UAS and AI for this work include more efficient surveys of large landscapes, reduced costs associated with staffing for on-the-ground surveys, and faster generation of maps for planning management actions. | AI outputs will include the location, counts, and coverage area of wild lupine populations in the target areas. Imagery will also be processed into orthomosaics and published. | 11/01/2024 | Developed in-house | FALSE | AI outputs will include the location, counts, and coverage area of wild lupine populations in the target areas. Imagery will also be processed into orthomosaics and published. | High-resolution aerial imagery collected with a Wingtra UAS. Expert annotations provided by USFWS biologists. 
| FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0075 | Automating the Detection and Classification of Wildlife in Aerial Imagery [2024 INV#WO0000000109409] | Pilot The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Computer Vision | The tools and workflows developed by this project will be used by the Bureau of Ocean Energy Management (BOEM) to assess wildlife populations as part of planning and monitoring for offshore energy development. BOEM requires information on the environmental sensitivity of marine species and marine productivity to make informed decisions regarding offshore oil and gas energy infrastructure placement and mitigation measures. This is mandated by the Outer Continental Shelf Lands Act (OCSLA Section 18(2)(G)), which requires the Secretary of the Interior to consider these factors when determining the size, timing, and location of future lease sales for the National Oil and Gas Leasing Program. | This work will support BOEM's activities to manage resources within the OCS Planning Areas, informing decisions about lease area selection and mitigation strategies. Using AI will reduce the time, cost, and risk associated with aerial surveys | AI outputs will include the location and counts of birds and other wildlife in offshore areas. Imagery, annotations, and code will also be published. | Developed in-house | FALSE | AI outputs will include the location and counts of birds and other wildlife in offshore areas. Imagery, annotations, and code will also be published. | High-resolution aerial imagery collected by USFWS and published by USGS through ScienceBase. Expert annotations from New Jersey Audubon biologists published as dataset by USGS. 
| https://doi.org/10.5066/P16N2NAB | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0074 | Automated identification of ducks from hunter-submitted photos using deep learning models | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | species identification of photographs of hunter-shot ducks for use in waterfowl harvest management | cost savings and improved efficiency in an operational monitoring program | composition of waterfowl harvest | 04/01/2024 | Developed in-house | FALSE | composition of waterfowl harvest | data consist of photographs of hunter-submitted ducks. Photos were collected by US FWS. | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0073 | Automated photographic identification of Eastern box turtles | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Matching photographs from a database of photos to determine capture history of individuals which can then be used in capture-recapture models to estimate population size and demographic parameters. | efficiencies in field data collection, allows data from community/citizen science to be used in monitoring activities, allows for the application of more sophisticated capture-recapture models | output is pairwise match or confidence scores and clusters of putative photo matches | 09/06/2024 | Developed in-house | FALSE | output is pairwise match or confidence scores and clusters of putative photo matches | Research dataset of photographs collected by USGS staff and volunteers at Patuxent Research Refuge | FALSE | Other | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0072 | Machine learning in remote sensing-based wildfire and natural resource risk assessments | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Predict the risks of wildfire, drought, and invasive species spread on assets of value to the American public | Predictions inform land management planning and decision-making to mitigate risk and save American taxpayers millions of dollars in damage from the additive effect of these stressors. | Maps of risk, prediction of which factors contribute to risk, highlight areas for potential mitigation actions. | 11/01/2024 | Developed in-house | FALSE | Maps of risk, prediction of which factors contribute to risk, highlight areas for potential mitigation actions. | Publicly available satellite imagery, wildfire perimeters and burn severity, topographic information, drought indices | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||
| Department Of The Interior | USGS | DOI-0071 | Pathogen identification in salmon | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Ichthyophonus, the most ecologically and economically important pathogen of wild marine fish, is hypothesized to be a major driver of premature mortality in Yukon River Chinook salmon. Currently, evaluation of Ichthyophonus infection in salmon requires lethal sampling, which is often prohibited in rapidly declining populations. We are using a transcriptomic approach in conjunction with random forest modelling to identify a panel of biomarkers capable of identifying Ichthyophonus in salmon using a non-lethal muscle tissue sample. | ability to aid in the conservation of sensitive salmon species | prediction | 07/02/2025 | Developed in-house | FALSE | prediction | Data used to train the model has been generated in house via transcriptomic analysis of salmon muscle tissue | FALSE | None of the Above | FALSE | Faster mineral resource assessments. This AI use case was developed due to Congress's mandate to assess critical minerals. | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | In-progress | |||||||
| Department Of The Interior | USGS | DOI-0070 | Telemetry Analysis Learning Algorithm (TALA) | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Applies machine learning to analyze operational data from agency-managed systems to support maintenance and reliability objectives. | Increased efficiencies in trending and analysis of satellite health and safety telemetry. | Alerts to limit violations and prediction of subsystem failures observed in satellite telemetry. | Developed in-house | TRUE | Alerts to limit violations and prediction of subsystem failures observed in satellite telemetry. | The tool helps identify trends and inform decision-making for system performance. | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||||
| Department Of The Interior | USGS | DOI-0069 | LANDFIRE | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | USGS EROS has provided the expertise and staff to conduct LANDFIRE mapping for over 20 years. LANDFIRE products provide the nationally consistent, high-quality vegetation and fuels data needed to support wildfire management, land-use planning, ecosystem restoration, and habitat management. By producing and updating these datasets, EROS ensures that federal, state, tribal, and local partners all work from the same trusted information, improving coordination and decision-making. The national scope and scientific rigor of EROS mapping make LANDFIRE an indispensable foundation for assessing risk, allocating resources, and sustaining resilient landscapes across the United States. | LANDFIRE produces a variety of geospatial products to support the fire community, including Existing Vegetation Cover, Existing Vegetation Type, Existing Vegetation Height, Environmental Site Potential, Biophysical Settings, and more | LANDFIRE produces a variety of geospatial products to support the fire community, including Existing Vegetation Cover, Existing Vegetation Type, Existing Vegetation Height, Environmental Site Potential, Biophysical Settings, and more. | 01/01/2025 | Developed in-house | FALSE | LANDFIRE produces a variety of geospatial products to support the fire community, including Existing Vegetation Cover, Existing Vegetation Type, Existing Vegetation Height, Environmental Site Potential, Biophysical Settings, and more. | Landsat data are the foundation of LANDFIRE data products. 
Field plots used to train and validate mapping and monitoring datasets include data from the Forest Inventory and Analysis (FIA) program, Natural Resources Conservation Service (NRCS), National Park Service, and other federal, state, and academic sources. | https://www.landfire.gov/data | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AIs development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||
| Department Of The Interior | USGS | DOI-0068 | Invasive Grass Mapping | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | USGS mapping of invasive exotic annual grasses is critical because these species, such as cheatgrass, alter ecosystems by increasing the frequency and intensity of wildfires and outcompeting native vegetation. Accurate, large-scale maps help land managers anticipate fire risk, prioritize restoration efforts, and guide grazing or herbicide treatments. Beyond fire management, these maps support wildlife habitat conservation, water resource protection, and long-term monitoring of ecosystem change, providing essential information for both local decision-making and national land management strategies. | DNNs have greatly improved our ability to accurately map invasive annual exotic grasses. We now produce weekly estimates of invasive grasses for the western US from early April until July, providing fire and land managers with critical data. | Weekly maps depicting invasive exotic annual grass extent and coverage for the western United States, with weekly products from early April to early July. | 04/04/2025 | Developed in-house | FALSE | Weekly maps depicting invasive exotic annual grass extent and coverage for the western United States, with weekly products from early April to early July. | Harmonized Landsat and Sentinel (HLS) data form the core dataset for mapping weekly exotic annual grasses (EAG). Bureau of Land Management's Assessment, Inventory, and Monitoring (AIM) plots are used to train deep learning models on HLS data to produce EAG estimates. | https://www.sciencebase.gov/catalog/item/67e29399d34ee7f142216b57 | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0067 | Evapotranspiration mapping and monitoring | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Mapping evapotranspiration (ET) with remote sensing is essential because it provides a consistent, large-scale view of how water is being used across landscapes. In agriculture, ET maps help farmers and water managers track crop water use, improve irrigation efficiency, and manage limited water resources more sustainably. Beyond farming, ET mapping supports drought monitoring, groundwater management, ecosystem health assessments, and fire fuel moisture assessments by showing how water and energy cycles vary over space and time. Without remote sensing, this kind of detailed, spatially explicit information would be impossible to obtain at regional or global scales. AI was used by USGS EROS to improve our ability to map ET, using multi-layer perceptrons and DNNs. | An improved ET product means a more informed farmer, allowing them to efficiently manage water resources and the crops that depend on them. The use of AI improves our ability to characterize surface temperature boundary conditions | ET maps for the Western US, used as input to the OpenET project. Also on-demand generation of global ET. | Developed in-house | FALSE | ET maps for the Western US, used as input to the OpenET project. Also on-demand generation of global ET. | Thermal data from the Landsat sensor is the primary data used in our SSEBop algorithm for mapping ET. For coarser scale applications, thermal data from MODIS or VIIRS are used. A reference ET variable required for mapping actual ET is dependent upon gridded meteorological data such as GRIDMET. 
| https://www.usgs.gov/landsat-missions/landsat-collection-2-provisional-actual-evapotranspiration-science-product | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0066 | National Land Cover Database (NLCD) [2024 INV#WO0000000107887] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Characterization of the physical land cover is critical for managing the lands, waters, and resources of the United States. The National Land Cover Database (NLCD) is easily one of the most widely used datasets produced within the Department of Interior, with applications including natural resource management, hydrology, biodiversity and habitat, energy and minerals, natural disasters, agricultural sustainability, and more. The criticality of these data demand 1) frequent updates, and 2) a characterization of how our landscapes are changing across time and space. Our linkage of three deep learning models, including a generative transformer-based AI model, has been vital for not only improving our characterization of the landscape with NLCD, but for enabling us to produce land cover products faster and more efficiently. The application of AI to this problem has saved the government valuable resources. | Our revamping of the NLCD methodology with a series of 3 linked deep learning models has allowed us to improve the product, reduce latency of delivery, and to produce that product more cheaply. | Maps of land cover change over time for the United States. | Developed in-house | FALSE | Maps of land cover change over time for the United States. | The National Land Cover Database is built on a foundation of Landsat data, with a mapping interval of 1985 to present. Evaluation of performance of NLCD algorithms depends upon a rigorous accuracy assessment that uses a stratified sampling approach, high-resolution geospatial data, and manual interpretation from land cover experts. 
High-resolution data is visualized in TimeSync software, sourced from Google Earth and associated commercial high-resolution data such as Maxar and Planet. | https://www.usgs.gov/centers/eros/science/annual-nlcd-data-access | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0065 | DARPA's CriticalMAAS [2024 INV#WO0000000108419; WO0000000096527] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify patterns in the data, isolate those that correlate with known mineral deposits, and integrate these with other geospatial layers to generate a predictive map for guiding future mineral exploration. | Assessment of the domestic availability of critical minerals. Insight into the U.S. critical mineral supply landscape. Evaluation of national critical mineral supply chains. Analysis of domestic sources and supply of critical minerals. | Predictive maps | Developed in-house | TRUE | Predictive maps | Mineral Resource Data System, US MIN, National Map database | FALSE | None of the Above | FALSE | https://github.com/orgs/DARPA-CRITICALMAAS/repositories | ||||||||||||||
| Department Of The Interior | USGS | DOI-0064 | Willamette Regional IWAAs, gradient boosted ML stream temperature modeling. | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improve model predictions of stream temperature across the basin over traditional methods. | The expected benefits and outcomes are improved stream temperature predictions, which is a cornerstone of the integrated water availability assessment for the region, and methods that are transferable across the basin. | The AI system outputs daily modeled stream temperatures. | 03/12/2025 | Developed in-house | FALSE | The AI system outputs daily modeled stream temperatures. | We are using publicly available stream temperature to train the model and a variety of hydroclimatic datasets to force the model. | https://doi.org/10.5066/P1M7G83G | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0063 | Mapping wood and wood hazards in the Willamette Basin, Oregon | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Other | Large wood in rivers poses hazards to people and infrastructure, particularly post wildfire. This work uses neural network models to map large wood in rivers from aerial imagery to understand how wood is distributed throughout river systems pre and post wildfire. | This work will help inform infrastructure hazards such as at dams and bridges, as well as to river users, such as rafters, inner-tubers, and fishermen. | The system output is mapping the location and extent of large wood pre and post wildfires | 10/01/2025 | Developed in-house | FALSE | The system output is mapping the location and extent of large wood pre and post wildfires | Training data is labeled imagery, such as "wood", "water", "sediment", "vegetation", and cross-learning from a previous iteration of the model developed in a different watershed. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0062 | Automated power line extraction using deep learning | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Provide a national, consistent, powerline dataset. | Cost savings over manual creation. Improves public access to information. | Geospatial dataset | 09/01/2025 | Developed in-house | FALSE | Geospatial dataset | Power lines from various sources | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | BOR | DOI-0058 | Aveva Predictive Analytics (APA) | Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A means to predict and prevent failure is at the forefront of operations and maintenance. One of the best economical means to accomplish this is through data anomaly detection and predictive maintenance for power generation applications. | The maintenance savings to Reclamation for the successful implementation of predictive maintenance is estimated to exceed $60 million per year when implemented across Reclamation's hydropower fleet. | Anomaly detection, fault diagnostics, analysis of startup/shutdown conditions, configurable model frequencies, alerts, case management, etc. | FALSE | Anomaly detection, fault diagnostics, analysis of startup/shutdown conditions, configurable model frequencies, alerts, case management, etc. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | OCIO | DOI-0057 | FBMS UPC Chatbot | Pre-deployment The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | An OIG draft finding identified a significant number of purchase requests where the "Other" UPCs were used instead of the IT-related UPCs that would trigger an IT Approval. This misclassification undermines OCIO's ability to track IT spending. | DOI staff who initiate purchase requests will interact with the UPC Chatbot Assistant so that the correct UPC code is used, especially when planned purchases are IT-related. | The correct UPC will be provided to the requestor and used on the Purchase Request in FBMS. A longer-term solution would be auto-generation of the Purchase Request with the correct UPC pre-populated on the PR. It also remains imperative that the FBMS user responsible for the PR, as well as the IT Approver, validate the UPC. | FALSE | The correct UPC will be provided to the requestor and used on the Purchase Request in FBMS. A longer-term solution would be auto-generation of the Purchase Request with the correct UPC pre-populated on the PR. It also remains imperative that the FBMS user responsible for the PR, as well as the IT Approver, validate the UPC. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BLM | DOI-0056 | General Land Office Records Modernization | Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Using optical character recognition and natural language processing, the GLO system will automatically extract text and data elements from scanned land record documents that define mineral and land ownership. Specifically, publicly available land records including land patents (title deeds), survey plats and field notes, and other title documents. | Reduced labor in collecting metadata for document discovery | Digitized records / data extraction | FALSE | Digitized records / data extraction | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BLM | DOI-0055 | AI Trapline for Fire | Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | This project would gather information from various websites/applications, manipulating the inputs on each webpage to get images and store them for human analysis, and create a national weather synopsis. This is used for national-level management briefings on fuels/weather conditions forecasts in regard to wildland fire and all-hazard response, planning, and resource allocation. | Essentially, it would allow SMEs more time to interpret and create products instead of spending hours gathering them every day. | Images and information from various sites, gathered after manipulating the inputs, then organized and saved for interpretation. | FALSE | Images and information from various sites, gathered after manipulating the inputs, then organized and saved for interpretation. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BLM | DOI-0054 | RUDD AI Development of a Small Language Model for Land Use Planning Decisions Consolidation | Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Consolidate Resource Management Planning Documents, including but not limited to Resource Management Plans and Amendments, to create "current" decision outputs. | Quickly identify which decisions apply in a land use planning area. | Document and database outputs of current land use planning decisions in a report format | FALSE | Document and database outputs of current land use planning decisions in a report format | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BLM | DOI-0053 | Assessment Inventory and Monitoring (AIM) Chatbot | Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AIM program provides access to data from 73,000+ monitoring locations and offers tools, guidance, and workflows to support BLM staff in structured, repeatable data collection and application. While resources enhance decision-making, their volume makes discovery and use challenging, often requiring AIM team support. To streamline access, we propose a Retrieval-Augmented Generation (RAG) chatbot that uses AI to summarize documents and tools via metadata and embedded PDFs, enabling natural language queries. This will help users quickly identify and apply resources relevant to their workflows, reduce manual search time, and improve access to credible scientific evidence for decision-making. The chatbot will also free AIM staff to focus on advanced analysis by handling basic inquiries, increasing efficiency and capacity for both users and the Bureau. | Data collected under the AIM program are made available for both internal BLM use in decision making and public consumption. | Text-based recommendations based on internal guidance documents, monitoring methodologies, and example use cases pertaining to monitoring. The outputs inform users that the AI responses are strictly for information-gathering purposes. | FALSE | Text-based recommendations based on internal guidance documents, monitoring methodologies, and example use cases pertaining to monitoring. The outputs inform users that the AI responses are strictly for information-gathering purposes. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BLM | DOI-0051 | NEPA Document Generation | Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Utilize generative AI to generate content for NEPA documentation. The system will gather information from ePlanning and, via links to other systems, will be able to draw project information from those systems into ePlanning and then into a CE. We have the templates mapped, with about 35% of the information coming from either ePlanning or ePlanning/MLRS. | Faster processing and writing of NEPA documents, which can often be time-intensive. | Content for NEPA documents. | FALSE | Content for NEPA documents. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BLM | DOI-0050 | GraphRAG Comment Analysis and Management Solution | Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Comment analysis: identifying and binning comments from the public. The system will intake comments from a variety of sources, and AI then breaks them down by issue area. We anticipate training the model with NEPA projects, rulemaking, and public comments on draft documents. | Faster comment responses, plus the ability to handle and consolidate responses. | Analysis, binning, categorization, and responses. | FALSE | Analysis, binning, categorization, and responses. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BLM | DOI-0049 | User guides/training provided in chat form | Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | We have multiple systems with multiple user guides that are 200-plus pages. Additionally, we have a significant amount of training material. Our customer survey responses consistently identify that they are not usable because they are so dense, and they are often needed in the field, i.e., they are too cumbersome to download and read on tablets and phones. I would like them to be available to the AI as source material so that our users can ask specific questions. I want to do this without having to rewrite and format the PDF documents. The applications are only for internal users, and the documentation is not considered sensitive. If this use case works, I have other documentation I would also like to consider in the future. | We have 10 applications, each with user guides that range from minimal to over 200 pages. This will create efficiency by helping staff find relevant information when needed. This should reduce the amount of training material and user guides created. | Recommendations for the business user on how to accomplish their tasks in an efficient and accurate way. | FALSE | Recommendations for the business user on how to accomplish their tasks in an efficient and accurate way. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BOR | DOI-0048 | Using Machine Learning to Automate Crack Mapping and Structural Health Monitoring | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Monitoring infrastructure health, particularly crack mapping in large concrete structures, is highly time-consuming and can be subject to variability from the individual performing the work. | Automated crack mapping via machine learning trained on high-resolution imagery would increase the efficiency of this task and make the results more consistent. | A workflow and method to automate crack mapping in large infrastructure imagery. | FALSE | A workflow and method to automate crack mapping in large infrastructure imagery. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BOR | DOI-0047 | Seasonal Water Supply Forecasting: Pyforecast [2024 INV#DOI-69] | Pilot The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Reclamation utilizes water supply forecasts (cumulative expected inflow over multiple months) to inform reservoir operations and water allocation decisions. | More skillful forecasts and internal forecasting tools support improved reservoir operations. | The Pyforecast tool is available to Reclamation reservoir operators, allowing them to produce in-house forecasts between forecasts issued by other agencies and to explore forecast sensitivity to different scenarios. | 01/01/2021 | Developed in-house | FALSE | The Pyforecast tool is available to Reclamation reservoir operators, allowing them to produce in-house forecasts between forecasts issued by other agencies and to explore forecast sensitivity to different scenarios. | Uses a variety of public data | FALSE | None of the Above | FALSE | https://github.com/DOI-BOR/PyForecast | |||||||||||||
| Department Of The Interior | BOR | DOI-0046 | Piloting Machine Learning Inflow Forecasts Across Reclamation [2024 INV#DOI-58] | Pilot The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Reclamation utilizes streamflow forecasts to inform reservoir operations. | More skillful forecasts enable improved reservoir operations, yielding benefits such as more hydropower generation, increased water deliveries, and more effective flood risk management. | Reservoir operators have access to machine-learning based streamflow forecasts that were shown to be more skillful than other available models and forecast products. | Purchased from a vendor | Upstream Tech | FALSE | Reservoir operators have access to machine-learning based streamflow forecasts that were shown to be more skillful than other available models and forecast products. | Streamflow forecasts | https://www.upstream.tech/hydroforecast | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | BOR | DOI-0045 | Machine Learning Refines Quagga Habitat Suitability [2024 INV#DOI-74 (NEW)] | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Invasive mussels pose a number of challenges for the continuous operation of water infrastructure. | Understanding habitat suitability can inform management actions. | Using AI/ML methods, habitat variables were identified that may be limiting mussel establishment at reservoirs and may be influencing sudden declines in mussel populations at infested reservoirs. | FALSE | Using AI/ML methods, habitat variables were identified that may be limiting mussel establishment at reservoirs and may be influencing sudden declines in mussel populations at infested reservoirs. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BOR | DOI-0044 | Machine Learning Applied to Geotechnical Engineering: Statistical Methods Applied to Seismic Analysis [2024 INV#DOI-71 (NEW)] | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The process for determining the seismic risk for dams is time intensive. | Leveraging machine learning, high-level facility seismic risk screening can be conducted more efficiently. | A method that improves efficiency and consistency of risk assessments. | FALSE | A method that improves efficiency and consistency of risk assessments. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BOR | DOI-0043 | Machine Learning for Chemical Savings at Reverse Osmosis Plants [2024 INV#DOI-73 (NEW)] | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Water treatment processes are often complex and maintenance activities present tradeoffs between cost and system performance. | This project aims to use machine learning to optimize the usage of membrane cleaning chemicals in a water treatment plant using a reverse osmosis process. This would lower the unit cost of water produced. | A tool to inform plant operations as to best time to clean/perform maintenance rather than a fixed schedule or threshold. | FALSE | A tool to inform plant operations as to best time to clean/perform maintenance rather than a fixed schedule or threshold. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BOR | DOI-0041 | Improved Processing and Analysis of Test and Operating Data from Rotating Machines [2024 INV#DOI-62] | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This research strives to aid in the development of condition-based maintenance (CBM) and predictive maintenance (PdM) tools for Hydroelectric Facilities (Generators and Pumps) by exploring, testing, and developing software tools to process data collected from rotating machines. These software tools use various forms of AI/ML. | CBM and PdM aim to increase hydropower generation by reducing outages for maintenance and lower O&M costs by only performing maintenance when needed. | Data-driven tools using AI/ML techniques to inform power plant maintenance. | Developed in-house | FALSE | Data-driven tools using AI/ML techniques to inform power plant maintenance. | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | FWS | DOI-0040 | DocketScope | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Facilitate review of public comments. | Use AI capabilities and detections to assist with timely and accurate comment analysis. | Produce extractive summaries of comments. Detect general sentiment and requests for comment period extensions or public meeting requests. AI tools are supplemental and intended to aid in efficiently identifying key information within comments. Results are verified by analysts during comment review. | 08/04/2025 | Purchased from a vendor | The Regulatory Group, Inc. | TRUE | Produce extractive summaries of comments. Detect general sentiment and requests for comment period extensions or public meeting requests. AI tools are supplemental and intended to aid in efficiently identifying key information within comments. Results are verified by analysts during comment review. | Comments | FALSE | None of the Above | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | ||||||||
| Department Of The Interior | USGS | DOI-0039 | Geothermal Energy Assessments | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Executive Order 14156, Declaring a National Energy Emergency, directs federal agencies to identify and develop our Nation's energy resources. | A new methodology developed by the USGS leverages AI and advanced machine learning (ML) techniques to accelerate how we assess geothermal energy potential and brings new speed and precision to our science to unleash American energy. | The USGS is developing data-driven machine learning strategies (a form of artificial intelligence) to produce new maps of hydrothermal resource favorability for the Great Basin in the western US as part of our geothermal resource estimation efforts and to map geologic features and transitions using EarthMRI datasets in support of energy and minerals resource assessments. | FALSE | The USGS is developing data-driven machine learning strategies (a form of artificial intelligence) to produce new maps of hydrothermal resource favorability for the Great Basin in the western US as part of our geothermal resource estimation efforts and to map geologic features and transitions using EarthMRI datasets in support of energy and minerals resource assessments. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0038 | Lithium Potential Assessments | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying sources of lithium is critical to our Nation's economic prosperity and security. | The USGS has used machine learning to find new domestic sources of critical minerals including lithium. | USGS used AI to identify the large lithium potential of the Smackover Formation in Arkansas, which could unlock a new source of domestic supply. | FALSE | USGS used AI to identify the large lithium potential of the Smackover Formation in Arkansas, which could unlock a new source of domestic supply. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0037 | Mineral Resource Assessments | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Mineral resource assessments require a significant amount of data evaluation and processing. | AI-assisted methods are used to modernize assessment methodologies and capacity, allowing the USGS to meet the increasing demand for critical mineral assessments more efficiently. | Increased efficiency in generating mineral resource assessments. | FALSE | Increased efficiency in generating mineral resource assessments. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0036 | Water Time-Series Record Automation Framework development | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Leverage AI/ML to streamline USGS time-series data processing workflows | The AI/ML algorithms would be leveraged to help streamline data correction processes associated with time-series data record production in extensible software automation solutions. | The system will automatically apply, or suggest application of, corrections that will be layered on the raw time-series data to compute a final corrected time-series dataset. | FALSE | The system will automatically apply, or suggest application of, corrections that will be layered on the raw time-series data to compute a final corrected time-series dataset. | FALSE | FALSE | Yes by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Establishment of an appropriate appeal process is in-progress | |||||||||||||
| Department Of The Interior | USGS | DOI-0035 | Using advanced computing techniques for mobile monitoring platforms [2024 INV#WO0000000109492] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Support the navigation and swarming capabilities of autonomous vehicle platforms for water monitoring | Improved spatial data collection on water quality and quantity | Spatial water data | FALSE | Spatial water data | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0034 | Using advanced computing techniques for image-based monitoring [2024 INV#WO0000000109674] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Provide tools and methods for leveraging image-based monitoring and machine learning approaches for measuring surface water properties. | Improvements in USGS Water Mission Area operational efficiencies and reductions in safety incidents with the application of image-based monitoring techniques and methods. Enhanced data portfolio and new data types provided to stakeholders at reduced cost. | Auto-labeled imagery for use in training ML models; Pre-trained models for determining water level elevation; Enhanced video files for computation of surface velocities; Auto-generated ice-affected data qualifiers | Developed in-house | FALSE | Auto-labeled imagery for use in training ML models; Pre-trained models for determining water level elevation; Enhanced video files for computation of surface velocities; Auto-generated ice-affected data qualifiers | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | FWS | DOI-0033 | AI for Offshore Energy: Innovative Technology Streamlines Marine Wildlife Surveys [2024 INV#DOI-66] | Pilot The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Offshore energy development requires environmental assessments; using AI technology to conduct offshore migratory bird counts can efficiently provide such data; however, substantial automation is required | Counts of migratory birds with a high degree of accuracy, long-term cost efficiency, and improved safety for pilots due to remote sensing technology | Abundance and distribution data on migratory birds | Developed with both contracting and in-house resources | Quantaero | FALSE | Abundance and distribution data on migratory birds | Remote sensing data are acquired by FWS aircraft and camera systems. | FALSE | None of the Above | TRUE | https://github.com/USFWS/AI-for-USFWS-Migratory-Birds | |||||||||||||
| Department Of The Interior | USGS | DOI-0032 | CONUS EcoFlows Planning & Prototype [2024 INV#WO0000000109732] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | National-scale ecological-flow response models | Predictions of flow conditions of sites with anthropogenic influences where biological data exists to meet agency mission of assessing water availability | Dataset of benchmark estimates of unaltered flow conditions used in later ecological modeling for water availability | FALSE | Dataset of benchmark estimates of unaltered flow conditions used in later ecological modeling for water availability | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0031 | Nutrient, Salinity, sediment, temperature, and drought model development | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Simulate nutrients (phosphorus and nitrate), temperature, sediment, and salinity in streams across the U.S. | Includes prototype streamflow drought forecasts using data-driven, machine learning approaches for USGS gage locations across continental U.S. Provides model results on water quality aspects to meet agency mission of assessing water availability | Digital datasets of nutrients (phosphorus and nitrate), temperature, sediment, and salinity in streams across the U.S. and prototype streamflow drought forecasts | FALSE | Digital datasets of nutrients (phosphorus and nitrate), temperature, sediment, and salinity in streams across the U.S. and prototype streamflow drought forecasts | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0030 | Water Use Model Development [2024 INV#WO0000000109669] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Estimate multiple categories of water use across the U.S. | Outcome is quantified water demand across the U.S. to meet agency mission of assessing water availability | Digital datasets of estimates of water use | Developed in-house | FALSE | Digital datasets of estimates of water use | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | FWS | DOI-0029 | AI-Powered Wildlife Monitoring on National Wildlife Refuges | Pilot The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Computer Vision | For whooping cranes, the traditional survey approach is to estimate population size by conducting six repeated surveys due to the limits of human-observer surveys. Surveys of waterfowl are experimental, looking to improve the accuracy of surveys and to obtain improved measures of population sizes. | Cost efficiency and improved data quality. | Counts of migratory birds. For waterfowl, data can inform harvest management and provide spatially explicit abundance estimates. For whooping cranes, the goal is to monitor population trends. | Developed in-house | FALSE | Counts of migratory birds. For waterfowl, data can inform harvest management and provide spatially explicit abundance estimates. For whooping cranes, the goal is to monitor population trends. | Imagery data is acquired by FWS aircraft and a camera system | FALSE | None of the Above | TRUE | https://github.com/USFWS/AI-for-WHCR | ||||||||||||||
| Department Of The Interior | FWS | DOI-0028 | Implementation of Artificial Intelligence to Estimate Population Size of a Migratory Bird from Aerial Thermal Imagery [2024 INV#DOI-67] | Pilot The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Computer Vision | The current survey approach has excessive variability that cannot be biologically explained. A new approach is warranted to inform harvest management. | Improved safety for pilots and crew because of remote sensing methods, potential for cost efficiency if the approach is fully implemented, and improved data quality to inform harvest management. | A population estimate and distribution data for this hunted migratory bird. | Developed in-house | FALSE | A population estimate and distribution data for this hunted migratory bird. | Imagery data acquired by FWS aerial photography | FALSE | None of the Above | TRUE | https://github.com/USFWS/AI-for-SACR | ||||||||||||||
| Department Of The Interior | FWS | DOI-0027 | Atlas.ti (Qualitative Data Analysis Software) | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Existing software, such as Excel, was insufficient for the increasing quantity of qualitative data collected, necessitating specialized software geared toward qualitative data analysis. | The software increases the efficiency of qualitative data analyses and could automate labor-intensive tasks. A qualified social scientist is involved in the final analysis and insights to ensure accuracy and compliance with regulatory standards. | There are currently no or limited AI outputs, as those features are not currently supported. However, future outputs could include automated assessment of themes, sentiment analysis, and summarization of responses. | FALSE | There are currently no or limited AI outputs, as those features are not currently supported. However, future outputs could include automated assessment of themes, sentiment analysis, and summarization of responses. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | SOL | DOI-0025 | FOIA Record Pre-Review Tool | Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Records review is the resource-intensive step of FOIA request processing that would most benefit from AI assistance and optimization. | Improve the speed and accuracy of records review. | Spreadsheet of "Hits", or items that will be uploaded to the document review platform to cue reviewers' attention to information that may fall under a FOIA or Privacy Act Exemption. | FALSE | Spreadsheet of "Hits", or items that will be uploaded to the document review platform to cue reviewers' attention to information that may fall under a FOIA or Privacy Act Exemption. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | SOL | DOI-0024 | Document Review Platform Generative AI | Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | FOIA request processing is the resource-intensive step of the process that would benefit from AI assistance and optimization. | Potential benefits include increasing the speed and accuracy of document review. | TBD | FALSE | TBD | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | SOL | DOI-0021 | Microsoft eDiscovery Attorney-Client Privilege Detection | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying information subject to a FOIA exemption. | Improving speed and accuracy in identifying information that may fall under a FOIA exemption. This tool has recently been activated, and we are gathering data to see if it is useful in FOIA operations. | The model provides an attorney-client privilege score and indicates if an attorney is a participant in the document. | 09/01/2025 | Purchased from a vendor | Microsoft | FALSE | The model provides an attorney-client privilege score and indicates if an attorney is a participant in the document. | Resident in Microsoft eDiscovery for all files. | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | SOL | DOI-0020 | FOIA Request Lexical Similarity Tool in the Document Review Platform (Term Frequency Inverse Document Frequency (TF-IDF) - Cosine) | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identifying Similar FOIA Requests | Identifying requests that are nearly identical assists in reducing the duplication of work. Records that are identical or nearly identical can be deduplicated out or processed in bulk. | Percentage similarity between documents. Developed in-house was selected because this is not an out-of-the-box function available in the document review platform. | 08/01/2023 | Developed in-house | FALSE | Percentage similarity between documents. Developed in-house was selected because this is not an out-of-the-box function available in the document review platform. | FOIA Requests | FALSE | None of the Above | TRUE | ||||||||||||||
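The TF-IDF/cosine technique named in the row above can be illustrated with a minimal pure-Python sketch. The example requests are hypothetical, and this is not DOI's in-house implementation (which runs inside its document review platform); it only shows why near-duplicate requests score close to 1.0 and unrelated ones near 0.0:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build smoothed TF-IDF term-weight vectors for a small corpus."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: number of docs containing each term.
    df = Counter(term for tokens in tokenized for term in set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({
            term: (count / len(tokens)) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        })
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical FOIA request texts.
requests = [
    "all records relating to permit 123 issued in 2022",
    "all records relating to permit 123 issued in 2022",
    "annual caribou survey data for the north slope",
]
vecs = tfidf_vectors(requests)
print(round(cosine(vecs[0], vecs[1]), 3))  # duplicate requests score ~1.0
print(round(cosine(vecs[0], vecs[2]), 3))  # unrelated requests score 0.0
```

A production tool would add tokenization, stop-word handling, and thresholding before flagging a pair as a duplicate.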
| Department Of The Interior | SOL | DOI-0019 | Enabling FOIA Request Clustering Capability in the Document Review Platform (Density Based Algorithm alongside Term Frequency Inverse Document Frequency (TF-IDF)) | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identifying contextually similar FOIA requests. Developed in-house is selected because, while the document review platform is used, this is not an out-of-the-box functionality. | This tool assists in identifying contextually similar FOIA requests but has some limitations, such as editing stop words. While we still use this tool sometimes, we primarily use our in-house built tool. | Grouped requests into clusters based on TF-IDF weighted vocabulary. Documents with distinctive overlapping terms form conceptual groups. This provides a visualization of requests that are conceptually similar. The clusters assist in identifying emerging topics of public interest, opportunities where proactive disclosure may fulfill multiple requests, and opportunities for collaboration on responses across multiple bureaus. While we primarily use our own in-house built tool for this capability, there are times when we still leverage this version. | 08/01/2023 | Developed in-house | FALSE | Grouped requests into clusters based on TF-IDF weighted vocabulary. Documents with distinctive overlapping terms form conceptual groups. This provides a visualization of requests that are conceptually similar. The clusters assist in identifying emerging topics of public interest, opportunities where proactive disclosure may fulfill multiple requests, and opportunities for collaboration on responses across multiple bureaus. While we primarily use our own in-house built tool for this capability, there are times when we still leverage this version. | FOIA requests | FALSE | None of the Above | TRUE | ||||||||||||||
| Department Of The Interior | SOL | DOI-0018 | FOIA Request Clustering Tool (Embedding based clustering alongside Term Frequency Inverse Document Frequency (TF-IDF)) | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identifying FOIA requests that share lexical and/or semantic similarity. | Quickly identifying similar FOIA requests. | Cluster Visualization to facilitate identifying similar requests. | 11/03/2023 | Developed in-house | FALSE | Cluster Visualization to facilitate identifying similar requests. | FOIA Requests | FALSE | None of the Above | TRUE | ||||||||||||||
| Department Of The Interior | SOL | DOI-0017 | FOIA Request Conceptual Similarity Tool (Semantic Similarity Score Generation) | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identifying contextually similar FOIA requests received by the department. | Reduces duplication of work when similar requests are submitted to multiple offices. These requests can be coordinated for uniform responses. | Semantic Similarity Score | 11/03/2023 | Developed in-house | FALSE | Semantic Similarity Score | FOIA Requests | FALSE | None of the Above | TRUE | ||||||||||||||
| Department Of The Interior | SOL | DOI-0016 | Performance Modeling / Performance Forecasting | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Estimating the number of requests that can be processed by a FOIA processor or bureau each year. | Expected performance. Identifying important variables. Assists in resource allocation decisions. | Expected number of requests to be processed. | Developed in-house | FALSE | Expected number of requests to be processed. | FOIA Annual Report Data | https://www.foia.gov/data.html | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | FWS | DOI-0015 | | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Other | Supports research and analysis using publicly available information to assist agency mission needs. | The tool streamlines the process of gathering open-source data from a variety of online sources to improve operational efficiency. | 03/01/2024 | Purchased from a vendor | TRUE | The tool streamlines the process of gathering open-source data from a variety of online sources to improve operational efficiency. | Social media accounts | FALSE | None of the Above | FALSE | ||||||||||||||||
| Department Of The Interior | FWS | DOI-0014 | AI data harvesting project ECOSphere [2024 INV#DOI-62] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Other | How can we automate the extraction, classification, and transformation of heterogeneous species-related files (e.g., PDFs, Word docs, Excel sheets, images) uploaded by biologists into structured formats compatible with the ETK system, while ensuring accuracy, scalability, and minimal manual intervention? | The data ingestion process will be automated, enabling the system to extract, classify, and transform data from various file types without human intervention. Species data will become available more quickly, allowing for faster decision-making. | The pilot will deliver an AI-powered system that automatically extracts and transforms species-related data from uploaded files into ETK's required format. It will also produce trained models, a functioning QA feedback loop, and a final report evaluating performance and scalability. | Developed in-house | FALSE | The pilot will deliver an AI-powered system that automatically extracts and transforms species-related data from uploaded files into ETK's required format. It will also produce trained models, a functioning QA feedback loop, and a final report evaluating performance and scalability. | Species Data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0013 | Comprehensive Condition Assessment | Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Find deficiencies in our real property assets and formulate reports from field- and machine-gathered data for the purpose of building a deferred maintenance backlog and for capital planning on real property assets in the USGS. | Saves staff time and travel costs, and speeds report creation to address maintenance needs in a more timely manner. | Maintenance logs and reports | FALSE | Maintenance logs and reports | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | FWS | DOI-0012 | ePermits: Chatbot Implementation and Permit Application Wizard Tools | Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Users face significant difficulties when interacting with the system due to a lack of personalized experiences, delayed and imprecise query responses, poor application form guidance, and an overall unintuitive user interface, resulting in reduced efficiency, increased errors, and lower user satisfaction. | Enhance functionality, improve user experience, assist in acceleration of ePermits processing, and align with modern technological and security standards to support the mission of conserving, protecting, and enhancing fish, wildlife, plants, and their habitats. | Natural Language Responses, Intent Recognition & Routing, Dynamic Form Assistance, Knowledge Base Integration, Contextual Follow-ups, Escalation to Human Agent, Multimodal Outputs. We do not use agentic AI systems in our permit processing workflows. All AI-assisted recommendations are subject to human review and approval. A qualified human reviewer will always be involved in the final decision-making process to ensure accuracy, accountability, and compliance with regulatory standards. | FALSE | Natural Language Responses, Intent Recognition & Routing, Dynamic Form Assistance, Knowledge Base Integration, Contextual Follow-ups, Escalation to Human Agent, Multimodal Outputs. We do not use agentic AI systems in our permit processing workflows. All AI-assisted recommendations are subject to human review and approval. A qualified human reviewer will always be involved in the final decision-making process to ensure accuracy, accountability, and compliance with regulatory standards. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | FWS | DOI-0011 | ePermits: Cognitive Search Capability | Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Current native search capabilities don't offer semantic search features that understand user intent and content meaning. | Enhance functionality, improve user experience, assist in acceleration of ePermits processing, and align with modern technological and security standards to support the mission of conserving, protecting, and enhancing fish, wildlife, plants, and their habitats. | Semantic Search Results, Enriched Metadata, Document Summarization, Faceted Navigation, Natural Language Q&A, Contextual Recommendations, Role-Based Personalization. We do not use agentic AI systems in our permit processing workflows. All AI-assisted recommendations are subject to human review and approval; no permitting decision is made based on AI results alone. A qualified human reviewer will always be involved in the final decision-making process to ensure accuracy, accountability, and compliance with regulatory standards. | FALSE | Semantic Search Results, Enriched Metadata, Document Summarization, Faceted Navigation, Natural Language Q&A, Contextual Recommendations, Role-Based Personalization. We do not use agentic AI systems in our permit processing workflows. All AI-assisted recommendations are subject to human review and approval; no permitting decision is made based on AI results alone. A qualified human reviewer will always be involved in the final decision-making process to ensure accuracy, accountability, and compliance with regulatory standards. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | OIG | DOI-0010 | Anomaly Detection for Financial Transactions Related to DOI Programs | Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Use traditional machine learning analytics with difficult-to-manually-analyze financial data for DOI programs to create analytical methods that are scalable and repeatable. Additionally, the analytics must be aligned with quality standards required for OIG audits and investigations. | It allows OIG to identify anomalies or trends that would be undetectable via manual analysis or would require endless IT and staff resources to manually process and analyze information. | Outputs will identify anomalous transactions and will be provided to auditors or law enforcement analysts and agents for further review and/or action. | 04/01/2025 | Developed in-house | TRUE | Outputs will identify anomalous transactions and will be provided to auditors or law enforcement analysts and agents for further review and/or action. | Financial Transaction Data related to DOI Programs | FALSE | Other | TRUE | ||||||||||||||
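The OIG's in-house analytics are not public; as a rough illustration of the classical anomaly detection the row above describes, a robust modified z-score baseline over a single transaction-amount feature (hypothetical payment values) might look like:

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Flag values far from the median using the modified z-score,
    built on the median absolute deviation (MAD), which stays robust
    to the very outliers it is trying to find."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(x - med) for x in amounts)
    if mad == 0:  # all values (nearly) identical: nothing to flag
        return []
    return [i for i, x in enumerate(amounts)
            if 0.6745 * abs(x - med) / mad > threshold]

# Hypothetical payment amounts; only the $9,500 entry is anomalous.
payments = [120.0, 135.0, 110.0, 128.0, 9500.0, 122.0, 131.0]
print(flag_anomalies(payments))  # → [4]
```

A mean/standard-deviation z-score would understate this outlier because the outlier itself inflates the standard deviation, which is why the median-based variant is the usual baseline; flagged indices would then go to human auditors for review.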
| Department Of The Interior | USGS | DOI-0008 | DiscoverDOI | Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Users need to find outdated content or prohibited content on DOI websites | Users can ask questions and receive answers and links to the websites in question | Links and content snippets of DOI websites | 06/02/2025 | Developed in-house | FALSE | Links and content snippets of DOI websites | public DOI websites | FALSE | None of the Above | TRUE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0007 | Effects of vehicle traffic on space use and road crossings of caribou in the Arctic [2024 INV#WO0000000110111] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Assessing the effects of industrial development on wildlife is a key objective of managers and conservation practitioners. However, wildlife responses are often only investigated with respect to the footprint of infrastructure, even though human activity can strongly mediate development impacts. In Arctic Alaska, there is substantial interest in expanding energy development, raising concerns about the potential effects on barren-ground caribou (Rangifer tarandus granti). While caribou generally avoid industrial infrastructure, little is known about the role of human activity in moderating their responses, and whether managing activity levels could minimize development effects. To address this uncertainty, we examined the influence of traffic volume on caribou summer space use and road crossings in the Central Arctic Herd within the Kuparuk and Milne Point oil fields on the North Slope of Alaska. | Gradient-boosted machine learning models to predict hourly traffic volumes for road segments across the study and generalized additive models (GAMs) to assess effects of traffic volume on caribou fine-scale summer movements | Prediction of liquefaction potential at input site, as it compares to existing case history dataset. | 10/03/2025 | Developed in-house | FALSE | Prediction of liquefaction potential at input site, as it compares to existing case history dataset. | This use case is being developed on existing enterprise data and analytics platforms within the agency rather than procuring additional platforms or SaaS to operate. | https://doi.org/10.5066/P9HXW3N5 | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | ASIA | DOI-0006 | IA Tribal Consultation Clearinghouse | Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Tribal Consultation Clearinghouse is envisioned by Indian Affairs (business owner) as a centralized, user-friendly webpage that enhances the accessibility of Tribal consultation information across the federal government. | As a federal website, the Tribal Consultation Clearinghouse will be designed in alignment with the BIA's existing approved website template, adhering to federal web design standards to ensure consistency, accessibility, and security. | A federal one-stop shop for Tribal Governments is now being discussed. If this is realized, federal DTLLs would most likely be modified into a unified format, and the Clearinghouse would take on added importance as a deconfliction tool. | FALSE | A federal one-stop shop for Tribal Governments is now being discussed. If this is realized, federal DTLLs would most likely be modified into a unified format, and the Clearinghouse would take on added importance as a deconfliction tool. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BIA | DOI-0005 | BIA-GEO Assist | Pilot The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Generative AI | The Branch of Geospatial Support (BOGS) and Regional Geospatial Coordinators (RGCs) offer technical support to BIA and Tribal GIS users that participate in the DOI Enterprise Licensing Agreement (ELA). The BOGS ELA Helpdesk Coordinator and RGCs allocate several hours per week to address helpdesk submissions that are rudimentary in nature and/or concern licensing offered and the process and requirements for participation in the ELA. RGCs allocate significant amounts of their time to the creation of map layouts to provide data products to BIA, stakeholders, and Tribes. We would like to expand the use of this tool to see if it can be customized to reference data sets maintained by DRIS to automate the creation of these maps in a predefined template, eliminating the workload of map creation from existing data. This tool also has the potential to assist with responding to FOIA requests. Our Division answers or collaborates on numerous FOIA requests each year. I would like to develop an internal version that could scour a FOIA repository to determine if a request duplicates a previous request and have it provide the dataset that was provided as a response. | Reducing the number of basic questions concerning rudimentary processes and/or general licensing and eligibility questions will reduce the number of hours spent on helpdesk tickets and allow the RGCs ELA helpdesk coordinator to focus more time on managing | Answers to user questions are generated by summarizing the results obtained from web queries that reference the predetermined web references built into the custom HTML code that was used to create the tool. However, we would like to explore expanding the tool's capabilities to query internal data sources to automate the production of GIS products based on pre-approved public data. We would also like to utilize this tool's functionality to scour past FOIA requests and their associated responses to determine if a duplicate request has been received and provide the dataset or decision made for the previous FOIA request or a determination that the current request is unique in nature. | Developed in-house | FALSE | Answers to user questions are generated by summarizing the results obtained from web queries that reference the predetermined web references built into the custom HTML code that was used to create the tool. However, we would like to explore expanding the tool's capabilities to query internal data sources to automate the production of GIS products based on pre-approved public data. We would also like to utilize this tool's functionality to scour past FOIA requests and their associated responses to determine if a duplicate request has been received and provide the dataset or decision made for the previous FOIA request or a determination that the current request is unique in nature. | none | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | ASIA | DOI-0004 | Automate Project Execution of FI&R (Facilities Improvement and Repair) Projects | Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Strategically prioritize Facilities Improvement and Repair (FI&R) projects based on a set of criteria developed in concert with policy, customer need, and capacity. Project funding is based on compliance with policy and funding parameters. The process is based on an objective and holistic review of IA-wide priorities. Aggregate data across systems and apply analysis to develop a schedule, reporting, identify dependencies, and develop notifications. | Strategic use of funds, allocation of multiple needs (projects), transparency of status, identification of dependency on external factors. | Data-based decision-making, streamlined project execution, reduced risk, cost optimization, improved stakeholder relationships, better tracking and reporting; structured decision-making (policy/data), risk management, improved compliance. | FALSE | Data-based decision-making, streamlined project execution, reduced risk, cost optimization, improved stakeholder relationships, better tracking and reporting; structured decision-making (policy/data), risk management, improved compliance. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BTFA | DOI-0002 | Non-Generative AI use for Trust Information Analysis and Reporting Tool [2024 INV#WO0000000110516] | Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Computer Vision | When records are scanned, both front and back pages are captured, even though back pages are generally blank. For Department of Justice litigation or other records research and discovery, this doubles the page count, making manual review unnecessarily time-consuming and resource-intensive. Automating blank page detection with Non-Generative AI (e.g., Azure Computer Vision and Document Intelligence Read) helps streamline this process and reduce burden. | Staff no longer need to inspect and remove every blank page. Fewer pages to review means lower costs for records storage and contract/labor expenses. Enables BTFA to respond more quickly to DOJ litigation requests by streamlining document preparation. | The output provides the final documents with the blank pages removed. | Purchased from a vendor | Microsoft Azure | FALSE | The output provides the final documents with the blank pages removed. | Electronic Documents | TRUE | https://www.doi.gov/privacy/btfa_pia | None of the Above | FALSE | https://www.doi.gov/privacy/btfa_pia | ||||||||||||
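The blank-page detection above is done with Azure Computer Vision / Document Intelligence; the underlying idea can nonetheless be sketched with a simple pixel-density heuristic on a grayscale scan. This is illustrative only (synthetic pixel data, hypothetical thresholds), not the Azure API or BTFA's pipeline:

```python
def is_blank_page(pixels, ink_threshold=200, max_ink_fraction=0.005):
    """Treat a grayscale page (0 = black, 255 = white) as blank when
    fewer than max_ink_fraction of its pixels are darker than
    ink_threshold, i.e., when almost no 'ink' is present."""
    total = sum(len(row) for row in pixels)
    ink = sum(1 for row in pixels for p in row if p < ink_threshold)
    return ink / total <= max_ink_fraction

# A synthetic 100x100 white page, and a copy with a block of 'text'.
blank = [[255] * 100 for _ in range(100)]
scanned = [row[:] for row in blank]
for y in range(48, 53):
    for x in range(20, 80):
        scanned[y][x] = 0  # 300 dark pixels = 3% ink coverage

print(is_blank_page(blank), is_blank_page(scanned))  # → True False
```

Real scanned backs are rarely perfectly white (noise, bleed-through, punch holes), which is why the row describes using a trained vision/OCR service rather than a fixed threshold like this one.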
| Department Of The Interior | BTFA | DOI-0001 | Integration of AI, specifically CoPilot for GitHub, for BTFA Software Development Life Cycle Process for DME and O&M applications. | Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Reductions in BTFA application and database staff have led to increased workload for remaining staff for DME and O&M activities of applications and servers. This code is considered sophisticated in nature; code resides in Azure App Workspace (Web/Function/Logic Apps) and standalone applications and servers. | Due to reduced staffing, BTFA needs to integrate AI for its DME and O&M of applications and servers (database/application/web/etc.), which have a sophisticated code base. AI has shown promise in reducing staff hours for all phases of the SDLC. | Changes in source code after review by application/DBA staff, along with appropriate testing before going to production. | 10/01/2025 | Purchased from a vendor | Microsoft | FALSE | Changes in source code after review by application/DBA staff, along with appropriate testing before going to production. | No data is used | FALSE | None of the Above | FALSE | https://github.com/features/copilot | ||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 1 | OCC.Chat | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Not high-impact | Not high-impact | Generic AI tool set that will serve basic needs but not specific needs supporting critical agency functions; however, this might be a foundation for other use cases. | Generative AI | GenAI chatbot that generates human-like responses to user questions using public knowledge for reference. | The AI improves operational efficiency by quickly delivering relevant information, which reduces the manual burden on staff and enhances overall productivity. For the general public, it enhances service delivery by providing instant and consistent responses to their queries, thereby improving user satisfaction and engagement. | The system generates human-like responses to user questions, drawing from public knowledge databases for accurate reference. The outputs include text-based answers that are contextually appropriate, informative, and easy to understand. Additionally, the system logs interactions to help analyze and improve future responses and identify common inquiry trends. | 07/01/2024 | Developed in-house | Yes | The system generates human-like responses to user questions, drawing from public knowledge databases for accurate reference. The outputs include text-based answers that are contextually appropriate, informative, and easy to understand. Additionally, the system logs interactions to help analyze and improve future responses and identify common inquiry trends. | No agency data is used to train the model | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 2 | OCC.InfoAssist | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | Generic AI tool set that will serve basic needs but not specific needs supporting critical agency functions; however, this might be a foundation for other use cases. | Generative AI | GenAI chatbot that generates human-like responses to user questions using OCC Comptroller handbooks and PPMs documents. | The AI improves operational efficiency by automating the response process, thereby reducing the manual workload on staff. It ensures that users receive consistent and accurate information quickly, which enhances user satisfaction and supports the agency’s mission of providing timely access to critical information. This leads to improved service delivery and better-informed decision-making across the agency. | The system generates human-like responses to user questions, specifically referencing content from OCC Comptroller handbooks and PPMs documents. The outputs include detailed, contextually relevant text-based answers that help users effectively address their inquiries. Additionally, the system logs interactions for further review, facilitating continuous improvement of response accuracy and relevance. | 07/01/2024 | Developed in-house | Yes | The system generates human-like responses to user questions, specifically referencing content from OCC Comptroller handbooks and PPMs documents. The outputs include detailed, contextually relevant text-based answers that help users effectively address their inquiries. Additionally, the system logs interactions for further review, facilitating continuous improvement of response accuracy and relevance. | No agency data is used to train the model | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 3 | Rulemaking Comment Analytics | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | The AI-driven comment analytics tool does not make or directly influence binding policy or regulatory decisions; it only synthesizes and summarizes public input for human review. Because its outputs do not determine individuals’ rights, benefits, services, or access to government resources, it does not meet the definition of a High-Impact AI use case. | Generative AI | The rulemaking process generates large volumes of public comments that require timely and consistent analysis. Manual review is labor-intensive, slow, and prone to variability, creating delays in identifying sentiment, themes, and stakeholder concerns. We need a more efficient, scalable method to analyze these comments so regulators can make informed decisions within required timeframes. | The Comment Analytics use case leverages Gen AI to rapidly analyze public comments submitted during rulemaking, including identifying sentiment, key themes, and areas of concern. This significantly speeds up the review process, reduces manual workload, and provides consistent, data-driven insights to support more informed regulatory decision-making. | The AI tool will produce structured summaries of public comments, including sentiment analysis, key themes, and recurring issues. Its output will provide regulators with a consolidated view of stakeholder feedback, enabling faster interpretation, prioritization of concerns, and more informed decision-making during the rulemaking process. | The AI tool will produce structured summaries of public comments, including sentiment analysis, key themes, and recurring issues. Its output will provide regulators with a consolidated view of stakeholder feedback, enabling faster interpretation, prioritization of concerns, and more informed decision-making during the rulemaking process. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 5 | OCC Guidance Summarization | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case does not make or influence decisions that affect rights, access to services, safety, or any other outcomes defined in the high-impact criteria, and is limited to routine operational support. | Generative AI | OCC staff must frequently review a wide range of OCC regulatory sources—Handbooks, Bulletins, statutes, old Circulars, Interpretive Letters, Corporate Actions, and related guidance—to understand requirements and develop supervisory positions. Manually locating, reading, and synthesizing these materials is time-consuming and can slow the early stages of analysis. The AI is intended to streamline this initial research by quickly identifying relevant sources and summarizing key regulatory points. | The AI helps staff rapidly gain orientation to applicable regulatory frameworks, reducing the time spent gathering and processing foundational information. This supports the agency’s mission by promoting more timely and consistent application of asset management regulations and supervisory expectations. The public benefits through enhanced oversight of fiduciary and asset management activities, contributing to stronger investor protection and safer banking practices. | The AI will generate targeted summaries of relevant regulatory documents, highlight key obligations and thematic concepts, and provide references to specific OCC guidance sources applicable to the question or issue being researched. Its outputs serve as a starting point for examiners and policy staff, facilitating more efficient and informed review while ensuring that final interpretations and decisions remain with agency experts. 
| ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 6 | Employee Task Organization | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case helps organize work and does not produce decisions that affect rights, benefits, access to services, safety, or any other outcomes defined in the high-impact criteria. | Generative AI | Assigning employees to tasks requires balancing multiple factors such as availability, workload, skill sets, and operational constraints. This process is often manual, time-consuming, and prone to inefficiencies, especially when coordinating across teams or during periods of high activity. The AI is intended to streamline workforce allocation by evaluating these constraints and recommending optimal task assignments, reducing administrative burden on managers. | The AI improves operational efficiency by helping managers make faster, more consistent assignment decisions that align resources with priorities. This supports the agency’s mission by ensuring that work is distributed effectively, reducing bottlenecks, and enabling staff to focus on higher-value activities. Ultimately, the public benefits from more responsive and timely execution of supervisory and regulatory responsibilities. | The AI will generate recommended task assignments for employees based on defined parameters such as availability, workload, required skills, and organizational constraints. Outputs may include suggested staffing plans, workload distribution summaries, and alerts when constraints or conflicts arise. These recommendations serve as decision-support tools for managers, who retain full authority over final assignment decisions. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 7 | Regulation Summarization | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case aids in the interpretation or summarization of regulatory information and does not generate decisions that affect rights, access to services, safety, or any other outcomes defined in the high-impact criteria. | Generative AI | Regulatory updates and final rules associated with the regulatory programs are extensive, complex, and often highly technical. Staff must manually review lengthy documents to extract relevant requirements, interpret changes, and understand impacts on supervisory processes. This manual approach is time-consuming and can delay implementation planning. The AI is intended to streamline this process by efficiently summarizing regulatory text and highlighting key provisions that matter most to examiners and policy staff. | By accelerating comprehension of regulatory materials, the AI helps staff more quickly understand updates and incorporate them into supervisory activities. This supports the agency’s mission by enhancing consistency, improving timeliness of regulatory interpretation, and reducing administrative burden. For the public, clearer and more efficient implementation can contribute to more consistent enforcement and better support for community investment outcomes. | The AI will produce concise summaries of regulatory sections, highlight significant changes introduced by the Final Rule, and extract obligations, definitions, and compliance considerations relevant to examiners. Its outputs function as a support tool to help staff rapidly orient themselves to regulatory changes while leaving full interpretation and decision-making to agency personnel.
| ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 9 | Regulation Statutory and Common Name Indexing | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case does not make decisions or affect anyone’s rights, benefits, access to services, or other areas defined as high-impact—it simply provides information without influencing outcomes. | Generative AI | OCC compliance staff frequently need to reference regulations by both their alphabetical identifiers (e.g., Regulation Z) and their full statutory names. Manually searching for these connections across regulatory materials is repetitive and time-consuming. The AI is intended to simplify this process by quickly generating a clear mapping between alphabetical regulation names and their corresponding statutory titles, improving speed and accuracy during compliance reviews. | The AI enhances staff efficiency by enabling faster access to accurate regulatory identifiers, reducing time spent on basic regulatory lookups. This supports the agency’s mission by improving consistency and reducing errors in examinations, training materials, and supervisory communications. The public benefits from more precise and efficient regulatory oversight, contributing to stronger consumer protections. | The AI will generate a quick-reference chart that pairs each regulation’s alphabetical designation with its full statutory or common regulatory name. Outputs may also include brief descriptions or links to source references. These materials serve as a convenient, standardized tool for examiners and staff, helping them navigate regulatory terminology more quickly and accurately. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 10 | Newsletter Content Support | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This draft-newsletter use case simply prepares informational content and plays no role in decisions that could affect rights, benefits, or access to services. | Generative AI | Creating newsletter content requires staff to synthesize updates, draft clear narratives, and format information for broad audiences. This process can be time-consuming, especially when staff must balance writing tasks with other operational responsibilities. The AI is intended to ease this burden by producing initial draft content that captures key messages and themes, allowing staff to focus on refinement rather than starting from scratch. | The AI increases efficiency by accelerating the creation of consistent, well-structured newsletter drafts, helping teams communicate more regularly and effectively. This supports the agency’s mission by improving internal and external information sharing, reducing administrative workload, and ensuring timely dissemination of updates. For the public or broader Treasury stakeholders, clearer and more consistent communication enhances transparency and understanding of agency activities. | The AI will generate draft newsletter content based on provided inputs such as topics, summaries, data points, or key messages. Outputs may include article drafts, section summaries, headline suggestions, and formatted text ready for staff review. The drafts serve as a starting point, with human staff responsible for editing, validating, and approving the final publication. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 11 | Meeting Minute Summarization | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case produces summaries for informational purposes and does not generate decisions that affect rights, access to services, safety, or any other outcomes defined in the high-impact criteria. | Generative AI | Meetings generate extensive discussions, action items, and decisions that must be captured accurately in meeting minutes. Manual compilation of these minutes is time-consuming, varies in quality, and can distract staff from active participation. The AI is intended to streamline this process by producing clear, organized summaries of meeting discussions, reducing the burden on staff and improving consistency in how meeting outcomes are documented. | The AI enhances efficiency by providing timely, reliable summaries of meeting minutes, helping staff quickly understand key decisions, responsibilities, and next steps. This supports the agency’s mission by improving coordination, ensuring accurate communication of supervisory-relevant information, and enabling more timely follow-through on issues. Indirectly, the public benefits from stronger oversight and more effective execution of supervisory responsibilities. | The AI will generate structured, concise summaries of meeting minutes, capturing major discussion points, decisions made, action items, and assigned responsibilities. Outputs may include narrative summaries, bullet-point action lists, and categorized topics for ease of review. These materials serve as a support tool for examiners and staff, with final validation and interpretation remaining the responsibility of agency personnel.
| ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 12 | Visualization Alt-Text Generation | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The chart- and figure-summarization tool only provides descriptive summaries of visual data for user convenience and does not generate outputs that drive legal, regulatory, or service-impacting decisions. Because humans fully interpret and apply the information, and the tool’s output does not affect rights, benefits, or access to government services, it does not qualify as a High-Impact AI use case. | Generative AI | Charts, figures, and tables often lack descriptive alt-text, which is essential for accessibility and compliance with federal standards. Creating accurate alt-text manually is time-consuming and requires staff to interpret visual information and translate it into clear, concise narrative descriptions. The AI is intended to automate this process by analyzing visual elements and generating draft alt-text, reducing manual workload and improving consistency in accessibility documentation. | The AI increases efficiency and ensures more consistent compliance with accessibility requirements, supporting the agency’s mission to provide inclusive and usable information to all stakeholders. Staff benefit from reduced time spent drafting alt-text, enabling them to focus on higher-value analytical and communication tasks. The general public—particularly individuals relying on assistive technologies—benefits from clearer, more accessible descriptions of charts and data visualizations. | The AI will generate draft alt-text descriptions for charts, figures, and tables, summarizing key visual elements such as trends, comparisons, labels, and notable data points. Outputs may include short descriptive sentences or structured alt-text templates tailored to the type of visual. 
These drafts serve as a starting point for staff, who remain responsible for reviewing and finalizing the content to ensure accuracy and compliance. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 13 | Handbook Highlights | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case only provides reference or clarification support and does not produce decisions that affect rights, access to services, or safety. | Generative AI | The Comptroller’s Handbook contains extensive, detailed guidance that examiners must review and interpret to support supervisory activities. Manually navigating and summarizing this material is time-consuming and can delay exam preparation or issue analysis. The AI is intended to streamline this process by quickly summarizing relevant sections of the handbook, enabling staff to more efficiently access key concepts, expectations, and procedural guidance. | The AI improves examiner efficiency by reducing time spent reading and synthesizing lengthy handbook materials, allowing staff to focus on deeper analysis and supervisory judgment. This supports the agency’s mission by promoting more consistent application of asset quality standards and strengthening the timeliness and effectiveness of examinations. The public benefits indirectly through stronger, more consistent oversight of asset quality risks and improved safety and soundness of financial institutions. | The AI will generate concise summaries of selected Handbook sections, highlighting major concepts, supervisory expectations, risk factors, and procedural steps. Outputs may include narrative summaries, bullet points, or structured outlines designed to support examiner orientation and prep work. These summaries act as decision-support materials, while final interpretations and supervisory decisions remain with agency personnel. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 15 | Training Content Generation | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case supports training and learning activities and does not make decisions that affect rights, access to services, or other outcomes listed in the high-impact criteria. | Generative AI | Developing training materials for supervisory topics requires subject-matter expertise and significant staff time to research concepts, draft explanations, create examples, and format content for instructional use. This manual process can slow down the production of new or updated training modules. The AI is intended to automate portions of content creation by generating draft educational materials, helping training teams produce high-quality instructional content more efficiently. | The AI improves the speed and consistency of training content development, enabling staff to more quickly access updated and comprehensive learning materials. This strengthens the agency’s mission by enhancing examiner competency, improving knowledge transfer, and supporting more effective supervision. Ultimately, the public benefits from better-trained examiners who can more effectively identify and address risks within financial institutions. | The AI will generate draft training materials such as lesson summaries, topic explanations, examples, practice questions, and structured learning modules aligned to credit training objectives. These outputs serve as a starting point for training teams, who review, edit, and validate all content to ensure accuracy, clarity, and alignment with agency standards. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 17 | Reporting Requirement Analysis | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | AI assists with reviewing and interpreting security reports but does not generate binding decisions or actions that impact individuals’ rights, benefits, or access to critical services. | Generative AI | Reviewing lengthy, technical, and highly variable reports manually to ensure completeness, accuracy, and adherence to regulatory requirements is time-consuming and resource intensive. The AI is intended to streamline this process by analyzing the content for required elements, identifying gaps, and highlighting areas that may not meet reporting standards. | The AI enhances efficiency and consistency, enabling staff to focus on higher-value evaluation and supervisory judgment. This supports the agency’s mission by improving the timeliness and quality of oversight related to information security and consumer data protection. The public benefits from stronger, more responsive supervision that promotes better safeguarding of sensitive customer information within financial institutions. | The AI will produce structured analyses of each report, including summaries of key findings, identification of missing or incomplete elements, and observations tied to compliance requirements. Outputs may include checklists, gap analyses, and synthesized policy-relevant insights that assist staff in determining whether a report meets regulatory expectations. Final review and regulatory conclusions remain fully with human examiners. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 21 | Software Code Generation | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The code generation tool assists developers by producing draft code snippets that are fully reviewed, tested, and validated by human engineers before use, ensuring it does not autonomously influence systems or services. Because its output does not make decisions affecting rights, benefits, access, or critical government functions, it does not meet the threshold for a High-Impact AI use case. | Generative AI | Developing software code and producing the associated documentation requires significant time and technical expertise. Routine coding tasks, boilerplate creation, and documentation drafting can slow down development cycles and divert staff from higher-value design and analysis work. The AI is intended to assist by generating code snippets, templates, and initial documentation drafts, reducing manual effort and accelerating development processes. | The AI increases development efficiency by automating repetitive coding tasks and improving documentation consistency. This enables technical staff to focus more on systems design, security review, and integration work, supporting the agency’s mission through faster delivery of high-quality technical solutions. The public benefits indirectly through enhanced operational efficiency, improved reliability of supporting systems, and reduced development backlogs that contribute to more effective regulatory and supervisory activities. | The AI will generate software code aligned to predefined requirements, along with code comments, technical documentation, and usage explanations. Outputs may include function templates, structured code blocks, API documentation, configuration examples, or troubleshooting guidance. 
These materials serve as development aids, with final validation, testing, and implementation remaining the responsibility of agency technical staff. | 07/01/2025 | Developed in-house | No | | No agency data is used to train the model | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 22 | OCC Writing Style Adherence | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case only checks examiner writing against style guidelines and does not produce decisions that affect rights, access to services or safety. | Generative AI | Examiners must ensure that written supervisory products consistently adhere to the OCC Writing Style Manual before submitting them to their EICs for review. Manually checking documents for clarity, tone, structure, grammar, and style compliance can be time-consuming and may lead to inconsistencies across teams. The AI is intended to provide a first-pass review that identifies deviations from writing guidelines and highlights areas needing refinement, reducing the burden on staff. | The AI improves the quality and consistency of written supervisory communications by helping examiners align their drafts with OCC writing standards before formal review. This supports the agency’s mission by promoting clearer, more professional, and more effective supervisory messaging. The public benefits indirectly from improved clarity and coherence in supervisory documents, which strengthen the transparency and reliability of regulatory communications. | The AI will produce feedback on examiner-written materials, including suggested revisions for grammar, clarity, organization, conciseness, and alignment with OCC Writing Style Manual expectations. Outputs may include highlighted text, recommended rephrasing, structural guidance, and style compliance notes. These suggestions serve as decision-support tools for examiners, while all final writing decisions and supervisory judgments remain with OCC staff. 
| ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 23 | Policy Development Editorial Assistant | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case is used for drafting policies and procedures manuals (PPMs) and spotting inconsistencies, without producing decisions that affect rights, access to services or safety. | Generative AI | Maintaining and updating the Policies and Procedures Manual (PPM) and Office- or Management-Level Policies and Procedures (OMPPs) requires staff to manually identify where changes are needed, ensure alignment across documents, and verify that no conflicts or duplications exist. This process is time-consuming and prone to oversight, leading to inconsistent guidance or outdated content. The AI is intended to streamline this workflow by detecting conflicting or duplicative information, identifying documents impacted by policy changes, and generating draft PPMs or updates to reduce manual errors and accelerate the policy development cycle. | The AI improves policy accuracy, consistency, and timeliness by helping staff quickly identify needed updates and eliminating conflicting or duplicative language across PPMs and OMPPs. This strengthens the agency’s mission by enhancing internal governance, ensuring clearer policy guidance, and reducing administrative burden on staff. The public benefits indirectly from more reliable internal policy frameworks that support efficient and effective regulatory and supervisory operations. | The AI will generate draft revisions to Policies and Procedures Manuals (PPMs) and Office- or Management-Level Policies and Procedures (OMPPs), produce newly drafted PPM sections based on provided specifications, and create reports highlighting conflicting, duplicative, or outdated policy elements. 
Outputs may include recommended edits, highlighted alignment issues, and structured draft policies. These serve as decision-support tools for policy teams, who maintain full responsibility for validating and approving all final documents. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 24 | Regulatory Rule Analysis | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | This use case focuses on summarizing or interpreting regulatory material and does not produce decisions that affect rights, access to services, or safety. | Generative AI | Comment letters on major rulemakings can be lengthy and numerous, making it difficult for staff to quickly understand the range of viewpoints and the main arguments presented. The AI helps by reviewing these submissions and distilling them into clear summaries, saving time and reducing the burden of manually going through each document. | By providing a faster way to understand feedback on the proposed rule, the AI supports more informed internal discussions and policy analysis. This helps ensure the agency remains aligned with evolving regulatory perspectives, ultimately contributing to stronger and more effective supervision practices that benefit the public and the financial system. | The system delivers concise summaries of comment letters, capturing key themes, concerns, recommendations, and areas of consensus or disagreement. These outputs give staff a clearer picture of the issues raised without needing to read each letter in full. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 25 | Public Information Analysis | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | This use case summarizes or analyzes publicly available financial discussions and does not produce decisions that impact rights, access to services, or safety. | Generative AI | Review of transcripts can be time-consuming, especially when trying to compare themes, identify shifts in tone, or spot notable changes. The AI helps streamline this work by quickly analyzing the transcripts and pulling together the main similarities, differences, and key insights that normally require extensive manual review. | With faster access to clear, consolidated insights from transcripts, examiners and analysts can more effectively monitor trends, emerging risks, and management perspectives. This strengthens supervisory awareness and supports more timely oversight, which in turn contributes to a safer banking environment that benefits the broader public. | The system generates summaries of individual transcripts, identifies patterns and highlights noteworthy variations, and produces a consistent executive-level overview. These outputs give users a clearer view of what banks are reporting and how those messages evolve over time. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 26 | Narrative Summary and Thematic Analysis | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | This use case provides informational or administrative support. | Generative AI | The AI helps cut down the time spent reading through lengthy reports. These documents can contain a lot of detail, and staff often need a quick understanding of the main points. The AI provides an easier way to get to the core issues without manually working through the full report. | By giving teams faster access to the key concerns and trends, the AI supports more informed supervisory planning and internal coordination. This strengthens the agency’s ability to stay ahead of potential risks in the banking system, which ultimately contributes to better protection of consumers and the public. | The system generates a clear, readable summary of the report, calling out major themes, noteworthy issues, and any items that may need follow-up. These outputs help staff quickly understand what matters most. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 30 | Requirements Elaboration Tool | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This tool assists in the refinement of requirements and further detailing business necessities. | Generative AI | Teams often start with high-level business needs that require additional clarification before work can move forward. The AI helps break down these broad statements into more detailed and precise requirements, reducing back-and-forth discussions and helping ensure everyone shares the same understanding of what needs to be delivered. | More clearly defined requirements lead to smoother project execution and fewer misunderstandings, which supports timely delivery of technology and process improvements. These improvements strengthen internal operations and ultimately help the agency carry out its supervisory mission more effectively, yielding better outcomes for the public and the financial system. | The system generates refined requirement statements, clarifies assumptions, identifies missing details, and organizes business needs into structured, actionable components. These outputs give project teams a clearer blueprint to work from. | The system generates refined requirement statements, clarifies assumptions, identifies missing details, and organizes business needs into structured, actionable components. These outputs give project teams a clearer blueprint to work from. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 31 | AI Use Case Review and Evaluation Workflow | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | This tool provides a structured assessment across five key dimensions—Bias, Ethics, Business Value, Risk, and Technical Excellence—to support informed value discussions for new AI services. | Agentic AI | Enables a structured assessment across five key dimensions—Bias, Ethics, Business Value, Risk, and Technical Excellence—to support informed value discussions for new AI services, including both Commercial off the Shelf (COTS) solutions and custom-developed capabilities. | This assessment framework provides a consistent, transparent method for evaluating AI solutions, ensuring that key considerations such as Bias, Ethics, Risk, and Technical Excellence are rigorously examined. It supports more informed decision-making, strengthens governance, and helps align AI investments with organizational value and compliance expectations. | The AI Suitability Plan delivers a comprehensive assessment package that reflects the perspective of each evaluation domain and functions as an assistant to the subject-matter experts in those areas. Its outputs offer structured, domain-specific analyses across Bias, Ethics, Business Value, Risk, and Technical Excellence, supporting SMEs with consistent, well-informed inputs to guide final decisions on AI suitability. | Developed in-house | No | The AI Suitability Plan delivers a comprehensive assessment package that reflects the perspective of each evaluation domain and functions as an assistant to the subject-matter experts in those areas. Its outputs offer structured, domain-specific analyses across Bias, Ethics, Business Value, Risk, and Technical Excellence, supporting SMEs with consistent, well-informed inputs to guide final decisions on AI suitability. | No agency data is used to train the model | No | None of the Above | Yes | ||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 32 | Framework Analysis | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | This use case summarizes and/or interprets regulatory content and does not generate decisions that affect rights, access to services, or safety. | Generative AI | Feedback on NPRs (notices of proposed rulemaking) can be extensive and highly technical, making it challenging for staff to quickly absorb the main points raised by commenters. The AI helps by reviewing this feedback and organizing it into clear themes and insights, reducing the time needed to work through large volumes of material. | By giving teams a faster understanding of the issues and concerns raised in comment letters, the AI supports more informed supervisory and policy discussions. This leads to better-quality analysis and decision-making related to capital rules, ultimately contributing to a stronger, more resilient banking system that benefits the public. | The system produces organized summaries of comment feedback, highlighting recurring themes, areas of disagreement, emerging concerns, and key technical points. These outputs make it easier for staff to see what commenters are focusing on and how viewpoints differ across the industry. | The system produces organized summaries of comment feedback, highlighting recurring themes, areas of disagreement, emerging concerns, and key technical points. These outputs make it easier for staff to see what commenters are focusing on and how viewpoints differ across the industry. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 33 | Survey Analysis | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The Survey Analytics tool only aggregates and summarizes respondent feedback to support internal understanding and does not make decisions that affect individuals’ rights, benefits, or access to government services. Since all insights are interpreted and acted upon solely by humans, the tool does not qualify as a high-impact AI use case. | Generative AI | The AI is intended to address the difficulty of manually reviewing and interpreting large volumes of open-ended survey comments. Analysts often spend significant time reading through text responses to identify themes, concerns, or sentiment. The AI helps streamline this process by quickly mining, organizing, and highlighting meaningful insights from the survey data. | By accelerating the analysis of survey feedback, the AI enables the agency to better understand employee or stakeholder perspectives and identify areas needing attention. Faster, more consistent insight extraction supports improved decision-making and enhances internal operations, which strengthens the agency’s overall effectiveness in carrying out its supervisory mission—ultimately benefiting the public through more informed regulatory oversight. | The system generates summaries, themes, sentiment indicators, and highlighted insights drawn from open-text survey responses. It may also surface recurring topics or issues, helping users quickly grasp key patterns without manually reading every comment. | The system generates summaries, themes, sentiment indicators, and highlighted insights drawn from open-text survey responses. It may also surface recurring topics or issues, helping users quickly grasp key patterns without manually reading every comment. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 34 | Research Suitability | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | This use case assigns ratings to research papers and is a solely internal analytical task. | Generative AI | The AI is intended to reduce the manual workload involved in reviewing and scoring materials used for symposiums. Evaluators currently spend considerable time assessing relevance, quality, and technical rigor. The AI helps streamline this process by providing an initial, consistent rating that supports quicker filtering and prioritization. | By speeding up the presentation reference material selection process, the AI allows subject-matter experts to focus on deeper evaluation rather than preliminary screening. This improves efficiency and helps ensure that high-quality, policy-relevant research is highlighted for symposiums, ultimately supporting better-informed supervisory and regulatory perspectives that benefit the broader financial system and the public. | The system generates a rating or score for each material artifact, along with brief reasoning based on relevance, technical execution, and alignment with the symposium’s thematic criteria. These outputs act as a first-level filter to support reviewers in identifying papers that merit further consideration. | The system generates a rating or score for each material artifact, along with brief reasoning based on relevance, technical execution, and alignment with the symposium’s thematic criteria. These outputs act as a first-level filter to support reviewers in identifying papers that merit further consideration. | ||||||||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 35 | OCC.DocChat | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | This GenAI tool serves as the foundation of other use cases at the OCC. | Generative AI | GenAI solution for uploading documents to Personal and Group workspaces and facilitating chat interactions. | The AI is designed to solve the problem of managing and interacting with documents by enabling users to upload documents to Personal and Group workspaces and facilitating chat interactions based on the content of these documents. This streamlines document management and enhances collaborative communication. | The AI improves operational efficiency by providing a centralized platform for document management and collaboration, reducing the need for manual document handling. It facilitates better team communication and coordination through chat interactions related to specific documents, enhancing productivity and decision-making. For the general public, it ensures that agency staff can work more efficiently and effectively, leading to improved service delivery. | Developed with both contracting and in-house resources | Yes | The AI improves operational efficiency by providing a centralized platform for document management and collaboration, reducing the need for manual document handling. It facilitates better team communication and coordination through chat interactions related to specific documents, enhancing productivity and decision-making. For the general public, it ensures that agency staff can work more efficiently and effectively, leading to improved service delivery. | No agency data is used to train the model | No | None of the Above | Yes | ||||||||||||||
| Department Of The Treasury | Office of the Comptroller of the Currency (OCC) | 37 | Resource Assignment Optimization | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | This use case matches open resources with available job opportunities and does not make hiring or employment decisions that could impact rights or access to opportunities. | Generative AI | The AI is intended to reduce the manual effort involved in reviewing open positions and identifying available staff who may be a good fit. Currently, analysts and managers must sort through resource lists, skill profiles, and job requirements individually. The AI helps automate this matching process by applying predefined rules and logic, making it easier to identify potential matches quickly and consistently. | The AI improves workforce efficiency by helping the agency deploy staff more quickly and effectively, which supports timely supervision activities and strengthens overall operational readiness. By reducing administrative workload, analysts and managers can spend more time on mission-critical responsibilities. Ultimately, better resource alignment contributes to more effective oversight of the banking system, which benefits the public through a stable and well-regulated financial environment. | The AI outputs a set of recommended matches between available staff and open job roles, highlighting how well each resource aligns with the job requirements based on skills, experience, and other predefined criteria. These outputs serve as suggestions to support decision-making and help managers quickly identify candidates who may be suitable for specific assignments. | The AI outputs a set of recommended matches between available staff and open job roles, highlighting how well each resource aligns with the job requirements based on skills, experience, and other predefined criteria. These outputs serve as suggestions to support decision-making and help managers quickly identify candidates who may be suitable for specific assignments. | ||||||||||||||||||||
| Department Of The Treasury | Bureau of Engraving and Printing (BEP) | BEP-1 | AskBEP AI ChatBot | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The AskBEP chatbot tool is designed to provide policy-related information to the BEP community in a convenient and conversational format. Its function is strictly informational and does not involve processing or storing PII or sensitive data. The chatbot does not make determinations that affect individuals' civil rights, personal safety, financial status, or access to essential government services. | Natural Language Processing (NLP) | The AskBEP chatbot addresses the challenge of quickly and consistently accessing policy information within the BEP community. Employees often need clear, accurate answers without having to search through lengthy documents or rely on staff availability. The AI solves this by providing an always-available, conversational tool that retrieves and explains relevant policy guidance in plain language. | The AskBEP chatbot improves accessibility, consistency, and efficiency by providing 24/7 conversational access to accurate policy information. The AskBEP chatbot also automates common inquiries while ensuring consistent, accessible policy information. | The AskBEP chatbot generates natural language responses to user queries in a conversational format. These outputs consist of policy information, guidance, or references to relevant resources. Responses are text-based, do not include PII, and are intended to be consistent, accurate, and user-friendly. | Developed with both contracting and in-house resources | Yes | The AskBEP chatbot generates natural language responses to user queries in a conversational format. These outputs consist of policy information, guidance, or references to relevant resources. Responses are text-based, do not include PII, and are intended to be consistent, accurate, and user-friendly. | The AskBEP chatbot does not use PII. Its responses are informed by internally approved policy documents and procedural guidance. Performance was evaluated using curated test queries, user surveys, and User Acceptance Testing to assess accuracy, clarity, and overall user experience. | No | None of the Above | Yes | ||||||||||||||
| Department Of The Treasury | Bureau of Engraving and Printing (BEP) | BEP-25 | Sub-Asset Maintenance Model | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | Not high-impact | Not high-impact | This use case is not high-impact because it focuses on a narrow operational task with limited enterprise-wide decision value or strategic impact. It proactively predicts the mileage until the next maintenance will be needed on a sub-asset machine. | Classical/Predictive Machine Learning | This model helps reduce downtime by forecasting when sub-asset machines will need maintenance, enabling proactive servicing and improved asset reliability. | The SAMM model helps forecast when sub-asset machines will require maintenance, reducing unexpected breakdowns, improving asset reliability, and optimizing maintenance schedules to support mission continuity. | The SAMM model predicts the mileage until the next maintenance will be needed on a sub-asset machine. | Developed with both contracting and in-house resources | Yes | The SAMM model predicts the mileage until the next maintenance will be needed on a sub-asset machine. | Used [name(s) removed] Data | No | None of the Above | Yes | ||||||||||||||
| Department Of The Treasury | Bureau of Engraving and Printing (BEP) | BEP-41 | Ink and Paper Consumption Forecast (IPCF) | Pre-Deployment - The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | Not high-impact | Not high-impact | IPCF predictive analytics model focuses on forecasting operational inputs (ink and paper), which are supporting resources. While useful for inventory planning, the model does not directly influence strategic decisions or mission-critical outcomes. | Classical/Predictive Machine Learning | The Ink and Paper Consumption Forecast (IPCF) AI model is intended to predict: 1. Ink mileage predictions based on ink type and associated denomination; 2. Ink consumption predictions based on number of sheets; 3. Paper consumption predictions given a time period; 4. Ink consumption predictions given a time period | The IPCF Predictive Analytics Model contributes to cost efficiency and supply chain resilience by predicting ink mileage based on ink type and denomination, as well as forecasting ink and paper consumption over time and by sheet volume. The model helps ensure accurate procurement, reduce material waste, and support uninterrupted production schedules. | The IPCF Predictive Analytics Model generates predictive outputs that support operational planning. The model produces: 1. Ink mileage estimates based on ink type and associated currency denomination; 2. Ink consumption forecasts tied to the number of sheets scheduled for production; 3. Paper consumption projections over defined time periods; 4. Ink usage predictions across time intervals to support procurement and inventory decisions | The IPCF Predictive Analytics Model generates predictive outputs that support operational planning. The model produces: 1. Ink mileage estimates based on ink type and associated currency denomination; 2. Ink consumption forecasts tied to the number of sheets scheduled for production; 3. Paper consumption projections over defined time periods; 4. Ink usage predictions across time intervals to support procurement and inventory decisions | ||||||||||||||||||||
| Department Of The Treasury | Bureau of Engraving and Printing (BEP) | BEP-42 | Inventory Replenishment Forecast | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | Not high-impact | Not high-impact | This use case is not high-impact because it offers limited strategic value and low decision leverage across the enterprise. It proactively predicts whether vendors will deliver their items by the date promised in the contract. | Classical/Predictive Machine Learning | This model helps reduce supply chain disruptions by forecasting vendor performance and item stockouts. It enables proactive sourcing decisions, improves inventory planning, and strengthens operational readiness. | The Inventory Replenishment Forecast model helps identify underperforming vendors and forecast item stockouts, enabling proactive sourcing, improved inventory planning, and reduced supply chain disruptions to support mission continuity. | The Inventory Replenishment Forecast model proactively predicts whether vendors will deliver their items by the date promised in the contract. | Developed with both contracting and in-house resources | Yes | The Inventory Replenishment Forecast model proactively predicts whether vendors will deliver their items by the date promised in the contract. | Used [name(s) removed] Data | No | None of the Above | Yes | ||||||||||||||
| Department Of The Treasury | Bureau of Engraving and Printing (BEP) | BEP-67 | IDV Model | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | Not high-impact | Not high-impact | This use case is not high-impact because it centers on long-range forecasting with uncertain accuracy and limited influence on near-term procurement decisions or enterprise outcomes. Predicting which items will require IDVs for purchasing over the next 5 years, by tier level, enables better utilization of IDVs overall and allows OSCM to proactively receive the necessary funding. | Classical/Predictive Machine Learning | This model helps forecast long-term inventory turnover to improve IDV utilization and support proactive funding requests. | The IDV model helps forecast long-term inventory turnover, enabling smarter IDV utilization and proactive funding alignment to support sustained mission readiness. | The IDV model predicts which items will require IDVs for purchasing over the next 5 years by tier level, enabling better utilization of IDVs overall and allowing OSCM to proactively receive the necessary funding. | 04/02/2025 | Developed with both contracting and in-house resources | Yes | The IDV model predicts which items will require IDVs for purchasing over the next 5 years by tier level, enabling better utilization of IDVs overall and allowing OSCM to proactively receive the necessary funding. | Used [name(s) removed] Data | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | Technology Common Services Center (TCSC) | DO/TCSC/EDM-1 | GenAI for Data Platform Operations | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The AI system's outcome does not affect civil rights, safety, or access to critical services; therefore, it doesn't meet the criteria for a high-impact AI use case. | Generative AI | Helps developers build applications by providing scalable access to large language models for tasks such as natural language processing, code generation, and automation. It integrates with other services to allow for enterprise-grade deployment of AI-powered solutions. | The AI solution allows EDM to enhance services by automating communication and data analysis and improving decision-making. It reduces administrative burden and supports transparency by summarizing complex information. | The outputs are AI-generated responses based on user input, tailored to the task and model used. These outputs can take many forms depending on the endpoint and use case for the EDM data products. | The outputs are AI-generated responses based on user input, tailored to the task and model used. These outputs can take many forms depending on the endpoint and use case for the EDM data products. | ||||||||||||||||||||
| Department Of The Treasury | United States Mint (USM) | MINT-01 | AI-Enabled Automated Visual Inspection for Mint Product Quality Assurance in Production Lines | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The AI system's outcome does not affect civil rights, safety, or access to critical services; therefore, it doesn't meet the criteria for a high-impact AI use case. | Computer Vision | The system is intended to automate the visual inspection of product (e.g., coins) on production lines to identify defects, inconsistencies, or anomalies that affect product quality. Currently, operators leverage systems that use comparable references to identify differences and, in some cases, conduct manual, visual quality checks using traditional inspection tools, which are labor-intensive, subject to human variability, and limited in speed. The AI solution aims to address these challenges by providing consistent, high-speed, high-accuracy inspection capabilities that can operate continuously across multiple shifts. | Implementation of automated visual inspection is expected to improve production efficiency, reduce operator workload, enhance coin quality, and decrease waste due to undetected defects. By reducing the incidence of defective products entering circulation or collector channels, the Mint intends to strengthen public trust and maintain brand integrity. Increased throughput and more stable quality control processes also support mission-critical delivery timelines and ensure the Mint can meet the nation’s coinage demand reliably. | The system shall output real-time assessments of coin quality, including defect classifications, visual indicators of anomaly location, severity rankings, and confidence scores. Automated pass/fail determinations are to be generated at production speed, along with dashboards and reports summarizing defect rates, trends, and production line performance. | The system shall output real-time assessments of coin quality, including defect classifications, visual indicators of anomaly location, severity rankings, and confidence scores. Automated pass/fail determinations are to be generated at production speed, along with dashboards and reports summarizing defect rates, trends, and production line performance. | Because this capability is still in the pre-deployment phase, no Mint-specific datasets have been used to train or fine-tune any models. Commercial off-the-shelf (COTS) solutions under consideration are generally trained on vendor-developed datasets and include high-resolution images of Mint products and defect conditions. If pursued, the Mint would evaluate the solution using internally generated sample images of finished coins, die strikes, and known defect types to validate performance and ensure accurate detection against Mint production standards. No operational Mint production data is currently used for model training. | No | Potential impacts include changes to workforce roles, increased dependence on automated inspection judgment, and the possibility of false positives or false negatives leading to unnecessary waste or undetected defects. There may also be impacts related to system integration, production-line pacing, and secure handling of high-resolution imagery from manufacturing operations. | |||||||||||||||||
| Department Of The Treasury | United States Mint (USM) | MINT-02 | AI-Driven Predictive Maintenance for Mint Manufacturing Equipment and Production Assets | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The AI system's outcome does not affect civil rights, safety, or access to critical services; therefore, it doesn't meet the criteria for a high-impact AI use case. | Classical/Predictive Machine Learning | The system is intended to identify patterns in equipment performance and lifecycle data to predict mechanical failures before they occur. Currently, maintenance relies on fixed schedules or reactive responses after failures, resulting in unplanned downtime, potential safety issues, and increased maintenance costs. The solution shall analyze machine data to anticipate component degradation and recommend optimal maintenance intervals. | Predictive maintenance capabilities will improve equipment uptime, support safer manufacturing operations, extend the usable life of key assets, and reduce costs associated with emergency repairs and production stoppages. Increased operational reliability directly supports the Mint's mission to deliver circulating and numismatic products on schedule for the nation and for collectors. Better maintenance forecasting also supports resource optimization and reduces waste. | The system shall produce recurring reports and alerts identifying equipment health status, predicted failure timelines, recommended maintenance actions, and confidence levels for each prediction. Outputs may include trend analyses, parts lifecycle projections, and prioritized maintenance schedules to support planning and decision-making. | The system shall produce recurring reports and alerts identifying equipment health status, predicted failure timelines, recommended maintenance actions, and confidence levels for each prediction. Outputs may include trend analyses, parts lifecycle projections, and prioritized maintenance schedules to support planning and decision-making. | At this stage, the Mint has not supplied any datasets for model training or fine-tuning. Predictive maintenance tools under evaluation typically rely on general industrial equipment datasets curated by the vendor, such as vibration patterns, temperature readings, cycle counts, and sensor-based performance indicators. If adopted, the Mint would evaluate the system using equipment telemetry, historical maintenance logs, fault reports, and vendor-provided lifecycle guidance to assess accuracy and applicability to Mint manufacturing assets. No Mint operational data is presently used in training. | No | Potential impacts include changes to maintenance scheduling practices, operational reliance on data-driven forecasts, and the risk of inaccurate predictions leading to unnecessary downtime or premature replacement of parts. There may also be impacts on workforce planning, supply inventory management, and integration with existing maintenance management systems. | |||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-01779 | eDiscovery Software | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Contextual Research (Exploring policies, regulations, trends, and strategic topics) | OCC is investigating the potential use of eDiscovery software and reaching out to other bureaus to put together a preliminary interest/requirements list. | Discovery of data based on legal requests | Discovery of data based on legal requests | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-01872 | Large Language Model (LLM) Chatbot for Proof of Concept | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Code Assistance - Generating, debugging, and documenting code | Test a PaaS to assist developers. The LLM will generate comments and documentation alongside code snippets, promoting better code understanding and maintainability. The LLM works with COBOL, Java, Angular, and other programming languages, making it versatile for multi-language projects. The LLM has the ability to catch common mistakes, improving code quality and reducing debugging time, thereby accelerating the overall coding process. | Code comments, documentation, improvement suggestions | 06/10/2024 | Developed with both contracting and in-house resources | No | Code comments, documentation, improvement suggestions | N/A. Evaluating tools only at this stage | No | None of the Above | No | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-01873 | Large Language Model (LLM) Coding for Proof of Concept | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Searching for agency information using a knowledge retrieval system. Contextual Research (Exploring policies, regulations, trends, and strategic topics). | Test standing up an LLM and training it using documentation such as PDF files and web pages so that it is knowledgeable of internal systems. Subsequently, use the LLM to power a chatbot that will be used by internal customers. Test to see if the chatbot can provide fast and consistent answers to customer queries. | FAQ handling and Case Deflection | 06/10/2024 | Developed in-house | No | FAQ handling and Case Deflection | Internal documentation such as PDF files and web pages, used to make the model knowledgeable of internal systems | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-01948 | USAspending Natural Language Search Initiative | Pilot - The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Natural Language Processing (NLP) | Contextual Research (Exploring policies, regulations, trends, and strategic topics) | Pilot a free text search function for new users to access insights from USAspending.gov in a more intuitive way. This effort will also accelerate technical solutioning and future development, and aid in PO decision-making/prioritization. This pilot will use LLM best practices and technology to accept natural language searches from users. Test the use of a natural language cloud service to enhance the user search experience on USAspending.gov. | Enhanced search and intuitive experience for users | Developed in-house | No | Enhanced search and intuitive experience for users | N/A. Evaluating tools only at this stage | No | None of the Above | No | ||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-01953 | GenAI Productivity Tool Pilot | Pilot - The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Transcribing, summarizing, or other efforts that improve the accessibility of a virtual meeting or interview using AI. (Meeting Summarization and Recap - Generating notes, action items, and summaries from meetings). Generating first drafts of documents, briefing, or communication materials using AI. Content Generation & Refinement (Drafting, rewriting, and improving documents, emails, presentations, and plans). Summarizing the key points of a lengthy report using AI. (Content Summarization - Summarizing emails, chats, meetings, and documents). Code Assistance - Generating, debugging, and documenting code. Data Analysis and Manipulation (Excel formulas, macros, forecasting, and SQL-based data work). Contextual Research (Exploring policies, regulations, trends, and strategic topics). Knowledge Retrieval (Locating known information and presenting or analyzing the information). Document Comparison and Alignment (Comparing versions of documents for consistency and alignment). Email and Task Automation (Automating rules, reminders, and scheduling). How-To Guidance (Step-by-step instructions for tools and workflows). Editing images, videos, or other public affairs materials using AI. Scheduling and managing social media posts using AI. | Pilot AI productivity solution in Fiscal Service tenant to determine business value | Multiple depending on application used or general browser based use. | 09/05/2024 | Purchased from a vendor | No | Multiple depending on application used or general browser based use. | N/A. Evaluating tools only at this stage | No | None of the Above | No | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-01972 | FRB AI Code Translation POC | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Code Assistance - Generating, debugging, and documenting code | This POC will explore the remediation of legacy code by leveraging generative AI to accelerate migration to a more modern framework. The purpose of this POC is simply to evaluate the efficacy of gen AI for this purpose and any code generated would not be moved to production. AI-driven code translation accelerates the conversion of code from one language or framework to another, including updating legacy systems to modern platforms. The use of generative AI in technical debt remediation offers substantial operational efficiencies, cost savings, and strategic benefits, quantifiable through reduced labor hours, lower migration costs, and faster time-to-market for modernized applications. These advantages position teams to better leverage modern technologies and align with the strategic goals of agility and innovation. | Remediation of Legacy code -- migrating to more modern framework | Developed in-house | No | Remediation of Legacy code -- migrating to more modern framework | N/A. Evaluating tools only at this stage | No | None of the Above | No | ||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-02020 | CoCounsel tool used in the legal office | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Contextual Research (Exploring policies, regulations, trends, and strategic topics). Generating first drafts of documents, briefings, or communication materials using AI. (Content Generation & Refinement - Drafting, rewriting, and improving documents, emails, presentations, and plans). Knowledge Retrieval (Locating known information and presenting or analyzing the information) | The legal office would like to use an AI assistant, which will be used by a paralegal in the legal office. | Accelerate legal work with a comprehensive AI solution that helps complete research, document analysis, and drafting all in one place. | Accelerate legal work with a comprehensive AI solution that helps complete research, document analysis, and drafting all in one place. | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-02034 | Knowledge Retrieval Chatbot POC | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Contextual Research (Exploring policies, regulations, trends, and strategic topics). Knowledge Retrieval (Locating known information and presenting or analyzing the information). | The customer would like to test the use of a chatbot that could access all of their relocation policies (FTR, DSSR, FAM, JTR, ARC Relocation Guide, etc.) in an effort to reduce the time spent searching the regulations. | Chatbot to search documents and other data | Developed in-house | No | Chatbot to search documents and other data | N/A. Evaluating tools only at this stage | No | None of the Above | No | ||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2024-02035 | Open source Python library and models for NLP | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Natural Language Processing (NLP) | Text analytics - process, analyze, and understand text | DNP program needs to add a technology request to GEM for an open-source Python library and models from a vendor. | Transforms raw text into a format ready for data analysis / rich linguistic representation | Purchased from a vendor | No | Transforms raw text into a format ready for data analysis / rich linguistic representation | N/A. Evaluating tools only at this stage | No | None of the Above | No | ||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02087 | AI-assisted Procurement POC | Pilot - The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Contextual Research (Exploring policies, regulations, trends, and strategic topics). Knowledge Retrieval (Locating known information and presenting or analyzing the information). | Gathering data for a Prism project. The expected outcome is to verify that the AI tool is a useful aid in helping Procurement employees identify and add the correct FAR clauses needed for a procurement action. | AI assisted search results from documents and other data | 05/05/2025 | Purchased from a vendor | No | AI assisted search results from documents and other data | This is a vendor provided tool. We are not able to train the model. | No | None of the Above | No | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02100 | FORMS AI | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Data Analysis and Manipulation | DDM-Debt is looking to move a Machine Learning/GenAI solution to production in their FORMS system. The ML/GenAI solution automates the classification and routing of documents to the appropriate business units and roles. | Convert PDFs to extract data fields that can be used to automate workflows | 02/11/2025 | Developed with both contracting and in-house resources | No | Convert PDFs to extract data fields that can be used to automate workflows | N/A. Evaluating tools only at this stage | No | None of the Above | No | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02120 | Pilot a suite of Fiscal Service Gen AI solutions | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Classical/Predictive Machine Learning | Data Analysis and Manipulation | The Data Strategy Division is seeking to assess performance of AI alternatives to address business needs by doing comparative performance analysis with rigorous ROI testing using scientific testing frameworks (e.g., A/B, pre-post) to help direct decision-making around Artificial Intelligence (AI) adoption. | AI Tool selection | 03/03/2025 | Developed with both contracting and in-house resources | No | AI Tool selection | N/A. Evaluating tools only at this stage | No | None of the Above | No | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02155 | Application to AI services integration POC | Pilot - The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Other | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Identify a method to connect a COTS product to an AI service. The proposed initiative aims to develop an enterprise API endpoint for AI services, enabling seamless integration of custom applications and Commercial-Off-The-Shelf (COTS) products. | Use of a locally controlled LLM for a COTS application | 07/08/2025 | Purchased from a vendor | No | Use of a locally controlled LLM for a COTS application | Utilizing vendor provided models. Not training any models. | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02182 | AI Application and Agent Management POC | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | AI system is a comprehensive platform designed to help developers and IT administrators design, customize, and manage AI applications and agents. It provides a unified interface, SDK, and APIs to integrate data. | Determine usefulness of AI tool/service | Determine usefulness of AI tool/service | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02186 | Research / Discovery on GenAI Product for IT | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Conducting initial research/discovery on AI product | Determine usefulness of AI tool/service | Determine usefulness of AI tool/service | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02187 | Research / Discovery on GenAI Product for IT | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Conducting initial research/discovery on AI product | Determine usefulness of AI tool/service | Determine usefulness of AI tool/service | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02188 | Research / Discovery on GenAI Product for IT | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Conducting initial research/discovery on AI product | Determine usefulness of AI tool/service | Determine usefulness of AI tool/service | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02189 | Research / Discovery on GenAI Product for IT | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Conducting initial research/discovery on AI product | Determine usefulness of AI tool/service | Determine usefulness of AI tool/service | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02190 | Research / Discovery on GenAI Product for HR | Pre-Deployment - The use case is in a development or acquisition status. | Human Resources | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Conducting initial research/discovery on AI product | Determine usefulness of AI tool/service | Determine usefulness of AI tool/service | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02191 | Research / Discovery on GenAI Product for IT | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Conducting initial research/discovery on AI product | Determine usefulness of AI tool/service | Determine usefulness of AI tool/service | ||||||||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02280 | Local LLMs Evaluation | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | An open-source tool that allows you to run large language models (LLMs) locally on your own computer. Can be used to run and experiment with various LLMs locally and gain greater control over data. | Local LLM evaluation | 08/05/2025 | Developed with both contracting and in-house resources | No | Local LLM evaluation | N/A. Evaluating tools only at this stage | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02297 | GenAI Productivity Tool with Web Grounding Pilot | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | Contextual Research (Exploring policies, regulations, trends, and strategic topics). Knowledge Retrieval (Locating known information and presenting or analyzing the information). | Conduct a web grounding-enabled pilot. By default, web grounding is off in government environments; enabling it will require an AI risk assessment. Without web grounding, the AI product can only provide information through the summer of 2024. | Test the security and impact of web grounding AI tool | Purchased from a vendor | No | Test the security and impact of web grounding AI tool | N/A. Evaluating tools only at this stage | No | None of the Above | No | ||||||||||||||
| Department Of The Treasury | Bureau of the Fiscal Service (BFS) | REQ-2025-02310 | Establish localized Large Language Model (LLM) | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The tool, once deployed, is only for productivity and does not impact any of the defined high-impact areas | Generative AI | AI Enablers (Enterprise solutions that enable AI to integrate into IT services infrastructures) | Consolidation of multiple research initiatives into an aligned approach to research an enterprise LLM that will be trained on approved internal documentation to provide accurate, consistent, and rapid responses to employee inquiries. This will improve service delivery, reduce case escalations, and free personnel to focus on complex, mission-critical work. | AI-Powered Service Performance Metrics. Efficiency and Responsiveness: Track reductions in response times versus the pre-LLM baseline, first-contact resolution without escalation, and inquiry deflection handled entirely by the LLM. Accuracy and Consistency: Evaluate response accuracy through audits against official documentation, measure alignment with current Fiscal Service policies, and track reductions in incorrect or inconsistent responses compared to manual baselines. Security and Compliance: Ensure zero unauthorized access through effective role-based controls, conduct quarterly reviews for compliance with federal privacy, cybersecurity, and records management standards, and pass all required internal and external security audits. User Adoption and Satisfaction: Measure internal adoption rates, collect user satisfaction feedback through surveys or ratings, and assess training effectiveness using feedback and usage metrics. Operational Impact: Quantify staff hours reallocated from repetitive inquiries to higher-value work, track operational cost savings, and measure the breadth of internal systems and processes covered by the LLM’s knowledge base. | 09/10/2025 | Developed in-house | No | AI-Powered Service Performance Metrics. Efficiency and Responsiveness: Track reductions in response times versus the pre-LLM baseline, first-contact resolution without escalation, and inquiry deflection handled entirely by the LLM. Accuracy and Consistency: Evaluate response accuracy through audits against official documentation, measure alignment with current Fiscal Service policies, and track reductions in incorrect or inconsistent responses compared to manual baselines. Security and Compliance: Ensure zero unauthorized access through effective role-based controls, conduct quarterly reviews for compliance with federal privacy, cybersecurity, and records management standards, and pass all required internal and external security audits. User Adoption and Satisfaction: Measure internal adoption rates, collect user satisfaction feedback through surveys or ratings, and assess training effectiveness using feedback and usage metrics. Operational Impact: Quantify staff hours reallocated from repetitive inquiries to higher-value work, track operational cost savings, and measure the breadth of internal systems and processes covered by the LLM’s knowledge base. | N/A. Evaluating tools only at this stage | No | None of the Above | Yes | |||||||||||||
| Department Of The Treasury | General Counsel | OGC-01 | Regulatory Reform Tool | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | High-impact | High-impact | Generative AI | Prepares draft proposed and final regulations; reviews statutes for potential deregulatory actions | The AI improves operational efficiency, reduces manual burden, and enhances service delivery to the public. | The tool seeks to identify regulations that are not statutorily required or that are inconsistent with Loper Bright; generates draft proposed and final rules | The tool seeks to identify regulations that are not statutorily required or that are inconsistent with Loper Bright; generates draft proposed and final rules | The model was trained and evaluated on a mixture of licensed data, data created by human trainers, and publicly available data. For government or Treasury enterprise use, no agency-specific or Treasury data was used to train, fine-tune, or evaluate the model unless such data was explicitly provided and securely isolated within that enterprise environment. | No | https://www.dhs.gov/publication/dhsallpia-097-use-conditionally-approved-commercial-generative-artificial-intelligence | None of the Above | Yes | https://www.dhs.gov/publication/dhsallpia-097-use-conditionally-approved-commercial-generative-artificial-intelligence | |||||||||||||||
| Department Of The Treasury | Terrorism and Financial Intelligence (TFI) | TFI-1 | Public Chatbot | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | Does not affect civil rights, safety, or access to critical services, or non-public information. | Generative AI | Public data is hard to sift through and results in calls to the helpline. | Make public information more easily attainable and reduce demand on the human call center | Aspirational, not yet funded. Chatbot interface that provides search results and summaries of public OFAC information, such as FAQs, executive orders, fact sheets, general licenses, statutes, and UNSCRs so that users of the public-facing OFAC website can find relevant information more easily. | Aspirational, not yet funded. Chatbot interface that provides search results and summaries of public OFAC information, such as FAQs, executive orders, fact sheets, general licenses, statutes, and UNSCRs so that users of the public-facing OFAC website can find relevant information more easily. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-01 | SBSE - Payments Topics Chatbot | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | Provide self-service capabilities to taxpayers seeking information about certain tax topics. | This chatbot provides a means for taxpayers to receive information without needing to speak with an agent. The benefits include reduced taxpayer burden as well as increased internal IRS operational efficiencies. | The NLP algorithm functions by classifying user inputs into predefined topics using advanced natural language processing techniques. It evaluates the input against a set of trained models and assigns a response based on a confidence score calculated by the algorithm. Only responses associated with these pre-established topics are provided, ensuring accuracy and consistency. The algorithm does not generate new or ad-hoc answers outside the defined topics, making it reliable for controlled environments where precision and adherence to predetermined content are critical. | 12/10/2021 | In-house & Contractor | Yes | The NLP algorithm functions by classifying user inputs into predefined topics using advanced natural language processing techniques. It evaluates the input against a set of trained models and assigns a response based on a confidence score calculated by the algorithm. Only responses associated with these pre-established topics are provided, ensuring accuracy and consistency. The algorithm does not generate new or ad-hoc answers outside the defined topics, making it reliable for controlled environments where precision and adherence to predetermined content are critical. | Learning set of utterances-to-intent maps. | No | None of the above | No | |||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-115 | Joint Committee on Taxation Review Research Aide | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The Joint Committee on Taxation Review (JCTR) Research Aide is a generative AI (GenAI) system that uses Retrieval Augmented Generation (RAG) search to answer questions on JCTR policies and procedures that are provided in natural language. This tool has a use case specific user interface (UI) that uses the Certara platform via Application Programming Interface (API) to perform the Retrieval-Augmented Generation (RAG) search and generation. | Operational efficiency and reduced cost by providing the necessary information to agents to quickly find answers to JCTR questions, reducing the time and effort needed to research responses. | This system uses an embedding model to produce a vector representation of texts for contextual searching, and a Large Language Model to generate a response to user inquiries. Outputs of the tool include the model generated response to a user's inquiry, as well as the text from relevant documents identified in the retrieval stage and used by the model in generating said response. | This system uses an embedding model to produce a vector representation of texts for contextual searching, and a Large Language Model to generate a response to user inquiries. Outputs of the tool include the model generated response to a user's inquiry, as well as the text from relevant documents identified in the retrieval stage and used by the model in generating said response. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-117 | Local GenAI Pilot | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Local Large Language Models (LLMs) will help avoid software/hardware (SW/HW) costs. LLMs will position IRS for sustainable AI services and AI tool integrations and mitigate many risks [Several Risks Mitigated Include: vendor lock-in, license shortages or over-purchasing, SW costs or per Application Programming Interface (API) call costs, contract gaps and access loss with outsourcing, and data privacy policies and interagency agreements restricting data leaving the IRS (Auditing/Compliance Algorithm's Source Code)] | Reducing human workload by leveraging LLMs, increasing development speed, automation, and operational efficiency. Ensuring alignment with internal Information Technology (IT) policies and security protocols by operating entirely within IRS infrastructure. Eliminates licensing costs, reducing operational expenses. Maintains full control over sensitive data, enhancing data governance and compliance with federal privacy mandates. Reducing maintenance costs and technical debt due to legacy code and technology. Aging/departing workforce and lack of expertise with legacy systems increases risk of system failures and operational bottlenecks. Resource optimization, efficiency, and sustainable solutions required to modernize critical IRS operations and services | Contextually grounded responses derived from the given input prompt. Examples: Text Generation, Summarization, Code Completion, Unit Tests, Text Translation, Text-to-Text Transformation. | In-house | Yes | Contextually grounded responses derived from the given input prompt. Examples: Text Generation, Summarization, Code Completion, Unit Tests, Text Translation, Text-to-Text Transformation. | We have LLMs that are fully pre-trained. Each model is specifically trained on various use case categories like code translation, text summarization etc. We do not perform additional training for these models at this time. | No | None of the above | No | https://github.com/Mozilla-Ocho/llamafile.git | |||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-118 | Internal Revenue Manual Research Aid (IRMA) | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The Internal Revenue Manual (IRM) Research Aide (IRMA) is a generative AI (GenAI) system that uses Retrieval Augmented Generation (RAG) search to answer questions on IRM policies and procedures that are asked in natural language. This tool has a custom user interface (UI) and utilizes a COTS platform via Application Programming Interface (API) to perform the RAG search and generate responses based on the full text of the IRM serving as a knowledge base. The generated responses also provide links back to the referenced sections of the IRM for further review. | Operational efficiency and reduced cost by providing the necessary information to taxpayer-facing customer service agents to quickly answer taxpayers' questions. This enables the agents to increase their capacity and help a larger number of taxpayers by reducing the time and effort needed to research responses. The public, in turn, benefits from a reduction in wait time to receive a response to their inquiries. | This system uses an embedding model to produce a vector representation of texts for contextual searching and a Large Language Model (LLM) to generate a response to user inquiries. Outputs of the tool include the model generated response to a user's inquiry, as well as the text and links to relevant references identified in the retrieval stage and used by the model to generate the response. 
| 04/01/2025 | In-house & Contractor | Yes | This system uses an embedding model to produce a vector representation of texts for contextual searching and a Large Language Model (LLM) to generate a response to user inquiries. Outputs of the tool include the model generated response to a user's inquiry, as well as the text and links to relevant references identified in the retrieval stage and used by the model to generate the response. | The model is trained and refined on the full text of the internal IRM, including the sections marked "Official Use Only (OUO)" that are redacted in the public version. The model is evaluated against a standard set of benchmarking questions whose responses have been provided and validated by subject matter experts (SMEs) with the relevant domain-knowledge. | No | None of the above | Yes | |||||||||||||
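As an illustration of the Retrieval Augmented Generation (RAG) pattern the IRMA entry above describes (embed the query, retrieve the closest knowledge-base passages, generate a response with links back to the referenced sections), here is a minimal sketch. The bag-of-words embedder and the template "generation" step are toy stand-ins for the COTS embedding model and LLM; the corpus, section numbers, and function names are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k passages closest to the query in embedding space."""
    q = embed(query)
    return sorted(corpus, key=lambda p: cosine(q, embed(p["text"])), reverse=True)[:k]

def answer(query, corpus):
    """Retrieve context, then build a grounded answer plus reference links."""
    hits = retrieve(query, corpus)
    context = " ".join(h["text"] for h in hits)
    return {"response": f"Based on the IRM: {context}",
            "references": [h["link"] for h in hits]}

# Invented two-passage corpus standing in for the full IRM knowledge base.
corpus = [
    {"text": "penalty abatement procedures for first time offenders", "link": "IRM 20.1.1"},
    {"text": "refund inquiry handling for customer service agents", "link": "IRM 21.4.1"},
]
result = answer("how do agents handle refund inquiries", corpus)
```

In a production RAG system the retrieval step would rank dense vectors from a trained embedding model and the response would come from an LLM conditioned on the retrieved text; the structure (retrieve, then generate, then cite) is the same.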
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-124 | Answering Employee Questions with Natural Language Processing in IRWorks | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | The purpose of the Artificial Intelligence (AI) element of the IRWorks platform is to utilize natural language to respond to common user questions about IRWorks knowledge base (KB) articles and service catalogue items in the IRWorks Employee Center. This AI element is interpretive. At a high level, this is accomplished by using language models to train the agent to respond to user questions. The decision/judgement that this AI element uses is to search existing KB articles/service catalogue items, in conjunction with the user's defined access within IRWorks, to automate Tier 1 support by outputting (in natural language) answers and/or associated links that align to the customer's (natural language) query. Yes, humans are involved in reviewing the output: the AI element does not produce actionable items but produces information or links that are provided to the querying user for them to review and/or take manual actions. The population impacted by this decision is all IRWorks users, about 90K+ IRS employees. | This functionality allows a better understanding of user questions, directing users to better information and allowing for quicker resolution of IT and HR queries and issues. | Improve efficiency and optimize outputs for end users requesting services/information from within the ticket/workflow management platform. | Vendor | Yes | Improve efficiency and optimize outputs for end users requesting services/information from within the ticket/workflow management platform. 
| Knowledge Base Data for Information Technology Service Management (ITSM) including how to guides. IRS specific 'utterance' phrases that determine user intent. Vocabulary synonyms for words or phrases unique to IRS. | No | None of the above | No | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-125 | Form 990N Machine Learning Model | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Classical/Predictive Machine Learning | The goal of this model is to predict non-eligible Form 990N filers as part of a data-driven approach to identify non-compliance within the population of exempt organizations (EOs) filing the Form 990N (Issue Control Number (ICN) 121004). | Form 990-N filings provide limited information to the IRS, making it difficult for Tax-Exempt Government Entities (TEGE) to detect non-compliance among that population of EOs. This use case provides an improved ability to predict non-compliance. | The model output is an Excel workbook with 34 columns including Employer Identification Number (EIN) and a PREDICTION column. | In-house | Yes | The model output is an Excel workbook with 34 columns including Employer Identification Number (EIN) and a PREDICTION column. | Three years of transactional data for each entity are used, with a combination of hyper-parameter tuning and feature engineering, to generate a Random Forest classification model. Model performance was assessed primarily by looking at recall and Receiver Operating Characteristic (ROC) curves. | Yes | None of the above | Yes | ||||||||||||||
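The evaluation approach this entry names (recall and ROC curves over a classifier's scores) can be sketched in plain Python. The labels and scores below are made-up illustrations, not Form 990-N data, and the 0.5 cutoff is an arbitrary example threshold.

```python
def recall(labels, preds):
    """Recall = true positives / all actual positives."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    pos = sum(labels)
    return tp / pos if pos else 0.0

def roc_points(labels, scores):
    """(FPR, TPR) pairs swept over every distinct score threshold,
    i.e. the points an ROC curve is drawn through."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        preds = [1 if s >= thr else 0 for s in scores]
        tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
        fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
        points.append((fp / neg if neg else 0.0, tp / pos if pos else 0.0))
    return points

# Hypothetical non-eligibility scores from a classifier, with true labels.
labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
preds = [1 if s >= 0.5 else 0 for s in scores]
```

Sweeping the threshold rather than fixing it at 0.5 is what distinguishes the ROC view from a single recall number, which is presumably why the entry reports both.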
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-127 | Volunteer Income Tax Assistance (VITA) Generative AI Chatbot Proof of Concept | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-129 | Ask-CFO Research Aide | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Ask-CFO is an extension of the Research Aide developed using a COTS platform originally designed for the Joint Committee on Taxation Research Aide (JCR) and Internal Revenue Manual Research Aid (IRMA) use cases. It is a Retrieval Augmented Generation (RAG) search on a number of Chief Financial Officer (CFO)-specific documents. In addition to the existing research aide capability and user interface (UI), extensions will be developed to incorporate AI Agents that can perform actions such as comparing document summaries across fiscal years or extracting information from tables. | This will help CFO users search and obtain information from their repository of relevant documents. | This system uses an embedding model to produce a vector representation of texts for contextual searching, and a Large Language Model to generate a response to user inquiries. Outputs of the tool include the model-generated response to a user's inquiry, as well as the text from relevant documents identified in the retrieval stage and used by the model in generating said response. | This system uses an embedding model to produce a vector representation of texts for contextual searching, and a Large Language Model to generate a response to user inquiries. Outputs of the tool include the model-generated response to a user's inquiry, as well as the text from relevant documents identified in the retrieval stage and used by the model in generating said response. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-140 | Sandbox for Large Language Model Research | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | This sandbox would enable IRS Criminal Investigation (CI) to research, test, and customize Large Language Model (LLM) AI tools to increase operational efficiencies within IRS-CI. This will positively benefit IRS-CI by operationalizing functional LLM tools for our agency, reducing the administrative burden and increasing access to a shared agency knowledge base. | The sandbox will benefit our agency by enabling IRS-CI to research, test, and customize LLM AI tools to increase operational efficiencies within IRS-CI. This will positively benefit IRS-CI by operationalizing functional LLM tools for our agency, reducing the administrative burden and increasing access to a shared agency knowledge base. | Use case dependent. | Use case dependent. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-143 | Digital Forensics Chatbot for IRS Employees | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-145 | Generative AI for Form 1040X: Support Tool for Amended Returns (STAR) | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-17 | DATA Act Bot for Procurement Data Matching | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | AI is used to validate whether public contract spending reporting matches the information in PDF contract documents. The Digital Accountability and Transparency Act requires consistent and reliable contract spending to be reported to the public in the Federal Procurement Data System (FPDS). An AI-powered bot automates comparing FPDS data with contract documents to improve the quality of contract spending reporting. | Policymakers and the taxpaying public can benefit by receiving higher-quality, more reliable contract spending data that has been vetted by AI-powered data validation. This contract spending data is available to the public on USASpending.gov. | The AI model validates that contract spending information reported to USASpending.gov matches contract documents. The system validates the consistency of contract metadata such as contract number, modification number, dollar amounts, contract work / place of performance location address, and contract dates. | In-house & Contractor | Yes | The AI model validates that contract spending information reported to USASpending.gov matches contract documents. The system validates the consistency of contract metadata such as contract number, modification number, dollar amounts, contract work / place of performance location address, and contract dates. | Contract documents and data from the Federal Procurement Data System. | No | https://www.irs.gov/pub/irs-pia/cwsb-pia.pdf | None of the above | Yes | https://www.irs.gov/pub/irs-pia/cwsb-pia.pdf | ||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-24 | Taxpayer Services Chatbot | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | This project implemented the Taxpayer Services chatbot, which provides self-service Frequently Asked Questions (FAQ) to taxpayers and international taxpayers regarding refunds, status of refunds, amended returns, Employer Identification Number (EIN), Earned Income Tax Credit (EITC), change of address, identity theft, and extension-to-file topics. The Intent Engine, using Natural Language Processing (NLP) and AI algorithms, classifies taxpayers' utterances (questions) into intents (requests). The answers provided to the taxpayers are not created by Generative AI. The responses of the chatbot and the business logic are predetermined by content owners. This AI tool aims to classify and navigate to a correct predetermined response. | This chatbot provides a means for taxpayers to receive information without needing to speak with an agent. The benefits include reduced taxpayer burden as well as increased internal IRS operational efficiencies. | The NLP algorithm functions by classifying user inputs into predefined topics using advanced natural language processing techniques. It evaluates the input against a set of trained models and assigns a response based on a confidence score calculated by the algorithm. Only responses associated with these pre-established topics are provided, ensuring accuracy and consistency. The algorithm does not generate new or ad-hoc answers outside the defined topics, making it reliable for controlled environments where precision and adherence to predetermined content are critical.
| In-house & Contractor | Yes | The NLP algorithm functions by classifying user inputs into predefined topics using advanced natural language processing techniques. It evaluates the input against a set of trained models and assigns a response based on a confidence score calculated by the algorithm. Only responses associated with these pre-established topics are provided, ensuring accuracy and consistency. The algorithm does not generate new or ad-hoc answers outside the defined topics, making it reliable for controlled environments where precision and adherence to predetermined content are critical. | Learning set of utterances-to-intent maps. | No | None of the above | No | ||||||||||||||
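The intent-classification behavior described in the Taxpayer Services Chatbot entry above (score an utterance against trained intents, answer only from predetermined responses when confidence clears a threshold, never generate free text) can be sketched as follows. The token-overlap scorer, the example intents, and the 0.3 threshold are toy assumptions standing in for the real Intent Engine and its trained models.

```python
def score(utterance, examples):
    """Crude confidence: best Jaccard token overlap with any training utterance."""
    toks = set(utterance.lower().split())
    best = 0.0
    for ex in examples:
        ex_toks = set(ex.lower().split())
        if toks | ex_toks:
            best = max(best, len(toks & ex_toks) / len(toks | ex_toks))
    return best

def classify(utterance, intents, threshold=0.3):
    """Return the predetermined response for the best-scoring intent,
    or a fallback when no intent is confident enough."""
    scored = {name: score(utterance, data["examples"]) for name, data in intents.items()}
    name = max(scored, key=scored.get)
    if scored[name] < threshold:
        return "Sorry, I can't help with that. Try rephrasing your question."
    return intents[name]["response"]

# Invented intent map: each intent has example utterances and a canned answer.
intents = {
    "refund_status": {
        "examples": ["where is my refund", "check my refund status"],
        "response": "You can check your refund status with the Where's My Refund tool.",
    },
    "change_address": {
        "examples": ["how do i change my address"],
        "response": "Submit Form 8822 to change your address.",
    },
}
```

The key property the entry emphasizes is preserved here: the system only ever selects among pre-authored responses, so low-confidence inputs fall through to a fallback rather than a generated answer.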
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-28 | Machine Translation | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Classical/Predictive Machine Learning | To better facilitate and automate content translation efforts at the IRS, a cloud-based commercial off-the-shelf (COTS) solution is being leveraged for use by the Linguistic, Policy, Tools and Services (LPTS) Team. The Machine Translation (MT) application is being assessed and evaluated by the primary users, the Linguistic Policy, Tools and Services (LPTS) organization, through integration with their existing processes. The MT application focuses on translation for existing text and labels into Spanish with the ultimate goal of becoming an enterprise solution for a variety of non-English translations. | Machine translation is used to translate text and file-based content from and to English, Spanish, Chinese (Traditional, Simplified), Korean, and Vietnamese; it is used by MAS/LPTS to speed up responses to taxpayers. | The MT application focuses on translation for existing text and labels into Spanish and other languages with the ultimate goal of becoming an enterprise solution for a variety of non-English translations. | In-house & Contractor | Yes | The MT application focuses on translation for existing text and labels into Spanish and other languages with the ultimate goal of becoming an enterprise solution for a variety of non-English translations. | Users can provide a custom model to override the vendor-generated default translations. The vendor, in turn, can learn from the custom models that users provide to improve the model. | No | None of the above | Yes | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-77 | Employee Resource Center (ERC) Chatbot | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | Enable self-service for employees seeking information about various employee-related topics. | This chatbot has proven to reduce the reliance on the ERC staff to answer employee inquiries, greatly improving ERC operations. | The NLP algorithm functions by classifying user inputs into predefined topics using advanced natural language processing techniques. It evaluates the input against a set of trained models and assigns a response based on a confidence score calculated by the algorithm. Only responses associated with these pre-established topics are provided, ensuring accuracy and consistency. The algorithm does not generate new or ad-hoc answers outside the defined topics, making it reliable for controlled environments where precision and adherence to predetermined content are critical. | 06/01/2021 | Vendor | No | The NLP algorithm functions by classifying user inputs into predefined topics using advanced natural language processing techniques. It evaluates the input against a set of trained models and assigns a response based on a confidence score calculated by the algorithm. Only responses associated with these pre-established topics are provided, ensuring accuracy and consistency. The algorithm does not generate new or ad-hoc answers outside the defined topics, making it reliable for controlled environments where precision and adherence to predetermined content are critical. | Transcripts from ongoing operations. | No | None of the above | No | |||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-80 | Winnie Chatbot for Employee IT FAQs | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-85 | Synthetic Data Engine | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Classical/Predictive Machine Learning | This AI-based synthetic data generator provides synthetic data for a robust, large-scale dataset consistent with US Population Data Heuristics for testing and advanced analytics by dimensionalizing over 180 elements of socioeconomic data and over 300 dimensions, aging people, households and businesses over time, and tightly correlating the synthetic data to actual familial and socioeconomic attributes of US taxpayers. | Generate synthetic data for testing IRS tax processing systems and significantly reduce the risk of exposing taxpayer information while enabling more comprehensive testing of functional test cases. This system can now be used to simulate fraud cases for testing negative and positive automated test cases. | The AI is capable of outputting 3 tax years of IMF and BMF returns starting in Tax Year 2022. With each XML Schema Definition (XSD) that is released under the Modernized e-File (MeF), the system implements each XSD version and provides full regression support for testing in successive years. Each XSD version has automated tests that run to ensure verifications can be run to reduce the likelihood of AI-generated synthetic returns that might have data anomalies.
Also, to help seed test systems with synthetic individuals and synthetic businesses, the system can output the Data Master 1 (DM1) file simulating a feed from the Social Security Administration (SSA) and the Application for Employer Identification Number file (Form SS-4) containing transcribed form information, so that the test system can be initialized with the appropriate backend reference data for validation of IMF and BMF entities, as well as the ability to validate incoming synthetic tax returns. | Vendor | No | The AI is capable of outputting 3 tax years of IMF and BMF returns starting in Tax Year 2022. With each XML Schema Definition (XSD) that is released under the Modernized e-File (MeF), the system implements each XSD version and provides full regression support for testing in successive years. Each XSD version has automated tests that run to ensure verifications can be run to reduce the likelihood of AI-generated synthetic returns that might have data anomalies. Also, to help seed test systems with synthetic individuals and synthetic businesses, the system can output the Data Master 1 (DM1) file simulating a feed from the Social Security Administration (SSA) and the Application for Employer Identification Number file (Form SS-4) containing transcribed form information, so that the test system can be initialized with the appropriate backend reference data for validation of IMF and BMF entities, as well as the ability to validate incoming synthetic tax returns. | Over 140 input feeds, including the US Census, the US Bureau of Labor, and many other sources, are used to understand the relations between variables and find those most relevant to household income levels. The population is divided into distinct groups defined by value ranges in sets of variables, and a new population is generated for each group while maintaining the frequency distribution of each variable. | No | Race/Ethnicity; Sex; Age; Socioeconomic Status; Residency Status; Marital Status; Income; Employment Status | Yes | ||||||||||||||
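The group-then-generate approach described in the Synthetic Data Engine entry above (partition the population into groups defined by value ranges, then generate new records per group while maintaining each variable's frequency distribution) can be sketched as follows. The attributes, the example group, and the `synthesize` function are invented for illustration; the real engine works over hundreds of dimensions with correlated attributes, whereas this sketch samples each attribute independently.

```python
import random

def synthesize(group, n, seed=0):
    """Generate n synthetic records by sampling each attribute from its
    observed frequency distribution within the group, so per-attribute
    distributions are preserved in expectation."""
    rng = random.Random(seed)  # seeded for reproducible test data
    attrs = group[0].keys()
    pools = {a: [rec[a] for rec in group] for a in attrs}
    return [{a: rng.choice(pools[a]) for a in attrs} for _ in range(n)]

# Hypothetical group of reference records (e.g. one income band).
group = [
    {"filing_status": "single", "dependents": 0},
    {"filing_status": "single", "dependents": 1},
    {"filing_status": "married", "dependents": 2},
]
synthetic = synthesize(group, 100)
```

Because every sampled value is drawn from the group's own pool, the synthetic records can never contain an attribute value absent from the reference data, which is one way such engines avoid leaking out-of-distribution artifacts into test systems.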
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-94 | AI Contract Document Toolbox | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | Presumed high-impact but determined not high-impact | Not high-impact | AI Contract Document Toolbox drafts procurement narratives, supports vendor evaluations, and aids source selection. It offers recommendations but does not approve contracts. Final decisions are made by agency officials using multiple information sources. | Generative AI | Extensive documentation is required in official contract files when procuring goods and services for agency mission needs. Procurement document drafting is currently largely manual and time-consuming. Further quality reviews of documents add more time to the procurement process and fail to catch some errors. | Generative AI chat reduces the friction in the procurement process by providing automated assistance with drafting and reviewing documents. Procurement professionals can quickly generate first drafts of procurement documents and then refine them according to specific agency needs. Further quality reviews occur more quickly when both AI and government staff vet documents. Further, AI can be instructed to incorporate lessons learned and new procurement policy goals, and to detect common errors that have occurred in the past. | The AI system generates accurate, contextually relevant, and human-readable text outputs based on user input queries and uploaded documents. Its outputs include: Document Summaries, Draft Responses, Content Rewrites, Question-Answer Pairs, Data Extraction, and Analysis and Recommendations. | 07/01/2025 | Vendor | Yes | The AI system generates accurate, contextually relevant, and human-readable text outputs based on user input queries and uploaded documents.
Its outputs include: Document Summaries, Draft Responses, Content Rewrites, Question-Answer Pairs, Data Extraction, and Analysis and Recommendations. | The model was trained on a large volume of documents from the Internet (estimated ~1 trillion words). | Yes | https://www.irs.gov/pub/irs-pia/aicdt-pia.pdf | None of the above | Yes | https://www.irs.gov/pub/irs-pia/aicdt-pia.pdf | |||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-98 | Tax Disclosure Text Clustering | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | This project takes written data from uncertain tax disclosures, cleans up errors in the text, incorporates additional attachment information not present in the Compliance Data Warehouse (CDW), and groups the disclosures into semantically similar topics. This creates a single source to find uncertain tax disclosure declarations and additional information to facilitate review. The process follows three steps: 1) extraction of text from Form 8275-R, Regulation Disclosure Statement, and Schedule UTP, Uncertain Tax Position Statement, from CDW and supplemental attachments, 2) embedding of text into numeric vectors using the E5-large model, and 3) performing hierarchical clustering on the text vectors to group the disclosures. | The primary goal of this use case is to gain insights useful for improving future examinations and outcomes. | Written descriptions provided by taxpayers disclosing items or positions not otherwise adequately disclosed on a tax return to avoid certain penalties are grouped into semantically similar topics to create clusters. The model produces a table of cleaned tax disclosures and their assigned cluster (numerical label). | In-house & Contractor | Yes | Written descriptions provided by taxpayers disclosing items or positions not otherwise adequately disclosed on a tax return to avoid certain penalties are grouped into semantically similar topics to create clusters. The model produces a table of cleaned tax disclosures and their assigned cluster (numerical label). 
| Hierarchical clustering is determined by cutting the dendrogram at a user-specified height, which in this case represents the minimum cosine distance at which clusters are determined to be distinct. This consisted of evaluation by the subject matter experts (SMEs). One critical aspect of the evaluation involved setting a minimum similarity threshold for the clustering process. We also conducted spot checks on a random sample of groups to ensure consistency. | No | None of the above | Yes | ||||||||||||||
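The clustering step described in the Tax Disclosure Text Clustering entry above (hierarchical clustering over text embeddings, cut at a minimum cosine distance so that disclosures closer than the threshold share a cluster) can be sketched with single-linkage merging over toy vectors. The 2-D vectors and the 0.2 cutoff are illustrative stand-ins for E5-large embeddings and the SME-chosen dendrogram height; this brute-force loop is far less efficient than a real clustering library.

```python
import math

def cosine_dist(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def single_linkage(vectors, cut):
    """Agglomerative clustering: repeatedly merge the two closest clusters
    until the smallest inter-cluster (single-link) distance exceeds `cut`,
    i.e. cut the dendrogram at height `cut`. Returns a label per vector."""
    clusters = [[i] for i in range(len(vectors))]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(cosine_dist(vectors[a], vectors[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > cut:
            break  # every remaining pair is more distant than the threshold
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    labels = [0] * len(vectors)
    for label, members in enumerate(clusters):
        for m in members:
            labels[m] = label
    return labels

# Toy "embeddings": two nearly parallel vectors and one orthogonal outlier.
vecs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
labels = single_linkage(vecs, cut=0.2)
```

The cut height plays the role the entry describes: lowering it splits clusters apart, raising it merges near-duplicate disclosures together, which is why the SME-validated similarity threshold was a critical evaluation step.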
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-99 | Employee Retention Credit Text Clustering | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | Application of a Line-Item Consolidation algorithm to identify highly similar text and apply additional features to similar clusters to enable identification of suspicious texts. | This enables the agents to increase their capacity and reduce time spent researching and identifying potentially suspicious trends in topics used in claims. | Detailed explanations from taxpayers as to why they are claiming the Employee Retention Credit (ERC) are grouped into semantically similar topics to create clusters. The ERC is a refundable tax credit for certain eligible businesses and tax-exempt organizations that had employees and were affected during the COVID-19 pandemic. The model produces a table of cleaned tax disclosures and their assigned cluster (numerical label). | In-house & Contractor | Yes | Detailed explanations from taxpayers as to why they are claiming the Employee Retention Credit (ERC) are grouped into semantically similar topics to create clusters. The ERC is a refundable tax credit for certain eligible businesses and tax-exempt organizations that had employees and were affected during the COVID-19 pandemic. The model produces a table of cleaned tax disclosures and their assigned cluster (numerical label). | We evaluated the effectiveness of the algorithm in supporting the identification of potentially risky clusters.
We did this in two ways. The first was to identify clusters that had a high proportion of additional risk flags: since highly similar text indicates that the claims potentially come from a single source group, the rest of the cluster is also a target for review. The other method was to identify groups with a large number of missing Preparer EINs; when the preparer ID is missing, the claims may still have a shared source. One critical aspect of the evaluation process involved setting a minimum similarity threshold for the clustering process. We also conducted spot checks on a random sample of groups to ensure consistency, where we found that the text within each group was predominantly consistent, highlighting the algorithm's ability to group semantically similar descriptions. | No | None of the above | Yes | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-103 | Safeguards | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The Office of Safeguards needs an AI assistant that can access its documents (e.g., IRM, Publication 1075, SOPs [PDFs/Word Docs], configuration checklists [Excel], NIST documents [PDF]) to generate responses and products. Ideally, the AI will be able to provide researched responses to questions employees may have. Additionally, it would also be able to review agency and IRS responses on documents (docx, xlsx) to ensure they're appropriate. The documents may be up to 400 pages. Currently, all of this is a manual process. | Quicker response times and better-quality responses | Access its documents (e.g., IRM, Publication 1075, Standard Operating Procedures (SOPs) [PDFs/Word Docs], configuration checklists [Excel], National Institute of Standards and Technology (NIST) documents [PDF]) to generate responses and products. Ideally, the AI will be able to provide researched responses to questions employees may have. | Access its documents (e.g., IRM, Publication 1075, Standard Operating Procedures (SOPs) [PDFs/Word Docs], configuration checklists [Excel], National Institute of Standards and Technology (NIST) documents [PDF]) to generate responses and products. Ideally, the AI will be able to provide researched responses to questions employees may have. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-104 | IRS IaaP Accelerator (AI-Powered Infrastructure as a Product Accelerator) | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | High-impact | High-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Agentic AI | Federal agencies face slow, manual, and error-prone infrastructure delivery: Cloud adoption is blocked by governance bottlenecks (NIST, FedRAMP, IRS 1075). Compliance evidence is manual and audit-heavy. Developers struggle with multi-cloud complexity and vendor lock-in. Security drifts accumulate until quarterly reviews, leading to delayed taxpayer service delivery. | The IRS IaaP Accelerator embeds AI Personas (Architect, Engineer, Product Manager, Security Generative Pre-trained Transformer (GPT)) into the delivery pipeline: Advisory (Level 1) – Personas draft architectures, Infrastructure as Code (IaC) modules, and guardrails. Semi-Autonomous (Level 2) – AI validates IaC pull requests, explains security findings, updates backlogs, and posts PR comments. Autonomous (Level 3) – Drift detection triggers cloud-based AI. Cloud-based AI proposes Terraform diffs and opens remediation PRs. Compliance-as-Code validates automatically. A human remains in the loop for approvals. Model Context Protocol (MCP) servers allow these personas to work across multi-cloud environments without retraining or rewriting: AI sees one normalized contract, regardless of provider. 80% faster remediation of drift and policy violations. Always-compliant infrastructure with auditable AI-generated evidence. Secure-by-default modernization across multiple clouds, aligned to federal mandates. Human-in-the-loop AI ensures governance while accelerating delivery. 
| The accelerator delivers recommendations, explanations, generated artifacts, and bounded decisions across L1–L3, with human-in-the-loop accountability at all levels. Recommendations: Architecture GPT suggests reusable cloud patterns, multi-cloud designs, and interoperability strategies. Engineer GPT recommends IaC and pipeline improvements. Product Manager GPT prioritizes backlog items using adoption and security signals. Security GPT proposes compliance guardrails, tagging, and remediation actions. Explanations: AI generates plain-language PR comments for failed compliance checks, explains detected drift and guardrail violations, and produces audit-ready narratives combining telemetry and policy outputs. Generated Artifacts: Outputs include IaC code stubs, CI/CD templates, draft policy files, backlog items (epics, features, user stories), and executive-ready presentation content. Predictions (Limited): Using telemetry and backlog trends, the system provides lightweight narrative forecasts on adoption and compliance risks, relying on pattern detection rather than statistical ML. Decisions and Actions: At L2, AI validates IaC pull requests, enforces guardrails, and synchronizes backlogs. At L3, it opens remediation PRs, proposes drift fixes, routes events to observability tools, and initiates bounded actions. All destructive changes require human approval. | The accelerator delivers recommendations, explanations, generated artifacts, and bounded decisions across L1–L3, with human-in-the-loop accountability at all levels. Recommendations: Architecture GPT suggests reusable cloud patterns, multi-cloud designs, and interoperability strategies. Engineer GPT recommends IaC and pipeline improvements. Product Manager GPT prioritizes backlog items using adoption and security signals. Security GPT proposes compliance guardrails, tagging, and remediation actions. 
Explanations: AI generates plain-language PR comments for failed compliance checks, explains detected drift and guardrail violations, and produces audit-ready narratives combining telemetry and policy outputs. Generated Artifacts: Outputs include IaC code stubs, CI/CD templates, draft policy files, backlog items (epics, features, user stories), and executive-ready presentation content. Predictions (Limited): Using telemetry and backlog trends, the system provides lightweight narrative forecasts on adoption and compliance risks, relying on pattern detection rather than statistical ML. Decisions and Actions: At L2, AI validates IaC pull requests, enforces guardrails, and synchronizes backlogs. At L3, it opens remediation PRs, proposes drift fixes, routes events to observability tools, and initiates bounded actions. All destructive changes require human approval. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-105 | Qualitative Text Classification for Analysis | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The AI tool, leveraging internal large language models, is intended to increase the operational efficiency of qualitative research within Small Business / Self-Employed (SB/SE) Research, increase research scope and complexity, and decrease overall time to customer. | Leveraging the generative AI applications within the large language model will allow analysts to benefit from an automated secondary analysis, improving overall reliability of analytical deductions and validating manually derived analytical outputs. | The AI system's output combines with the non-AI-derived manual analysis, extrapolating from the manually derived analysis and using unsupervised and supervised methodologies to correlate data into classes. Essentially, we take unstructured data and, using various inputs, leverage the AI to classify and structure the data to assist in mass text analysis. | The AI system's output combines with the non-AI-derived manual analysis, extrapolating from the manually derived analysis and using unsupervised and supervised methodologies to correlate data into classes. Essentially, we take unstructured data and, using various inputs, leverage the AI to classify and structure the data to assist in mass text analysis. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-109 | Forms Conversion | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The AFCS tool assists the team with working a higher volume of form conversions than can be sustained manually, given the loss of staff to the Deferred Resignation Program (DRP) and retirements. | The AFCS tool saves significant time in the authoring process by providing an initial draft version of the form. | The AFCS tool provides an initial form version along with the schema for the form. | The AFCS tool provides an initial form version along with the schema for the form. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-114 | Zero Paper AI Routing for Digitalization | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Agentic AI | The Zero-Paper Initiative (ZPI) aims to eliminate the flow of paper into the IRS. As a result, all non-tax mail submissions from taxpayers will be scanned through external vendors to enable the routing of digital documents and their associated data to the IRS for further processing/review. The digital documents and associated data will be routed downstream to various business units and processing systems based on the form type and other similar criteria. The initiative leverages AI to make routing decisions at various levels of complexity, to support routing at record processing time, and to generate routing logic dictionaries. | The expected benefits are: Automated routing: instantly analyzes form type or other characteristics to determine where files should be sent, removing delays from manual handling; Reduced bottlenecks: files are less likely to get stuck in queues or misrouted, which speeds up workflows and improves efficiency; Pattern recognition: the system can be trained to recognize patterns in file characteristics and ensure they’re routed consistently according to rules or learned behaviors; Error reduction: the system will help reduce misfiling and simplify logic such as duplication prevention; Handling large volumes: the system will be able to scale, allowing it to handle larger workflows that would otherwise require additional effort; Dynamic adaptation: the system will be able to easily adjust to the addition or removal of downstream systems or changes in routing processes without requiring significant system overhauls. | The output is the respective route or downstream system each digital package should be routed to. | The output is the respective route or downstream system each digital package should be routed to. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-117 | Modular Code Assistant (Explain & Refactor Small Methods) for Automation Test Application Development | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Assist the test automation team by explaining or suggesting improvements to individual Java methods or classes below 100 lines. Ideal for improving understanding and accelerating onboarding of new software engineers to automation testing in application development. | The objective is to implement a secure, localized AI chatbot leveraging Retrieval-Augmented Generation (RAG) to enhance test automation productivity. | Answer detailed project-specific queries. Interpret and simplify complex automation test reports. Potentially aid in troubleshooting and debugging automation scripts. | 09/03/2025 | In-house | No | Answer detailed project-specific queries. Interpret and simplify complex automation test reports. Potentially aid in troubleshooting and debugging automation scripts. | All the automation test data is synthetically generated in the programming/development phase for testing that API requests succeed, in lower environments (SBX/DEV) only. | No | None of the above | No | |||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-124 | Digitalization AI Agent | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The AI will help organize the large inventory of disparate digitalization projects and systems that are otherwise difficult to manage. | The AI will assist with document organization, recall, and sorting. The AI will reduce errors and save time by automating form filling and recalling information. The AI can quickly analyze large datasets, revealing trends, risks, and opportunities, and providing summaries. The AI can assist in managing tasks, prioritizing workloads, suggesting next actions, and helping employees focus on high-value tasks. The AI-powered search can quickly find relevant internal documents, policies, or best practices. In summary, the AI will save time and effort and complete tasks more efficiently. | The output will be Digitalization section information that is the result of AI-powered searches, as well as insights and recommendations. | The output will be Digitalization section information that is the result of AI-powered searches, as well as insights and recommendations. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-125 | Integrated Financial Management Information System (IFMIS) | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | This initiative enables quantitative decision-making that supports improved budget forecasting, formulation, planning, execution, and reporting based on approved methodologies. | The outcome will be a deeper understanding of IT interdependencies and integrated processes and systems. | Recommended solutions based on various scenarios considering current budget, forecasted demand, pending legislation, projects in various lifecycle stages, statutory requirements, high-impact deliverables, risk, and the various trade-offs necessary for leadership to make the most informed choice. Improved communications and workflow to expedite decision making and delivery within the IT Business Unit. | Recommended solutions based on various scenarios considering current budget, forecasted demand, pending legislation, projects in various lifecycle stages, statutory requirements, high-impact deliverables, risk, and the various trade-offs necessary for leadership to make the most informed choice. Improved communications and workflow to expedite decision making and delivery within the IT Business Unit. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-126 | Integrated Data Retrieval System Modernization | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Other | IDRS currently operates on a mainframe with millions of lines of COBOL, Assembler, and C++. The AI tool automates code conversion to Java and migrates data to relational databases, enabling and accelerating modernization, reducing cost, risk, and technical debt. | Decommission mainframe by 2028. Reduce reliance on shrinking COBOL workforce availability and expertise. Improve scalability and maintainability. Enable integration with modern IRS systems (UAPI, EDP). | Converted Java code from legacy COBOL. Translated database schemas (from DMS 2200 to RDBMS). Technical reports on code structure, dependencies, and data models. | Converted Java code from legacy COBOL. Translated database schemas (from DMS 2200 to RDBMS). Technical reports on code structure, dependencies, and data models. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-130 | Treasury Readability | Pre-Deployment - The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Government contract descriptions are indecipherable to the average American. Treasury Acquisition Procedures Update No. 25-05 requires that contract descriptions in the Federal Procurement Data System provide a clear and concise plain-language description of products or services acquired under a specific contract. Under the Federal Funding Accountability and Transparency Act (FFATA) et seq., the IRS must publish contract spending reports for viewing by policymakers and the public. The IRS has previously funded and published research on the use of models to validate the accuracy of structured contract spending data such as dates, dollar amounts, and addresses. Advances in large language models have made it feasible to automatically assess the quality of unstructured text descriptions of contract purchases. This code provides automated suggestions from a Large Language Model, outlining shortcomings and detailing how contract descriptions can be improved. AI prompts are used to evaluate the quality of contract descriptions on an adjectival rating scale and provide narrative feedback on how to improve descriptions. | Help Treasury/IRS improve the quality of contract description language and increase transparency with agency contract spending. Evaluate the quality of contract spending descriptions on an adjectival rating scale. Generate narrative feedback on how to improve specific contract descriptions using AI. Generate reporting quantifying trends in contract description quality (e.g., improvement or decline in data quality). | The developer will design prompts and deliver responses from an open-weight, open-source Large Language Model (LLM) to the client using an R-based workflow. Prompt refinement and model outputs will be iteratively reviewed and adjusted until the client confirms satisfaction with the responses. Example Model Outputs: Rating: Acceptable. The description “THOMSON REUTERS ANNUAL TAXATION SUBSCRIPTION” clearly identifies a subscription-based contract with Thomson Reuters. While concise, it lacks details about the specific services included, intended users, and usage context, which would improve clarity and understanding of the contract scope. Rating: Marginal. The description “BPA SET-UP FOR ISS ENTERPRISE SERVICE MANAGEMENT SOLUTION SERVICES” is unclear and relies on unexplained acronyms, which may confuse readers. It also omits key context such as the purpose of the service, intended users, and usage location, making the contract scope difficult to interpret and resulting in a marginal readability assessment. | The developer will design prompts and deliver responses from an open-weight, open-source Large Language Model (LLM) to the client using an R-based workflow. Prompt refinement and model outputs will be iteratively reviewed and adjusted until the client confirms satisfaction with the responses. Example Model Outputs: Rating: Acceptable. The description “THOMSON REUTERS ANNUAL TAXATION SUBSCRIPTION” clearly identifies a subscription-based contract with Thomson Reuters. While concise, it lacks details about the specific services included, intended users, and usage context, which would improve clarity and understanding of the contract scope. Rating: Marginal. The description “BPA SET-UP FOR ISS ENTERPRISE SERVICE MANAGEMENT SOLUTION SERVICES” is unclear and relies on unexplained acronyms, which may confuse readers. It also omits key context such as the purpose of the service, intended users, and usage location, making the contract scope difficult to interpret and resulting in a marginal readability assessment. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-30 | Modernization Accelerator - Legacy Applications Chatbot & Code Conversion | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-31 | Modernization Accelerator | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-35 | Data Integration using Informatica Data Management Cloud | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-37 | Resume Scanner | Retired | Retired | High-impact | High-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-4 | Line-Item Consolidation for Form 1120 | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | Large Business & International (LB&I) partnerships have a variety of forms and statements that include text. Filings include semi-structured attachments to returns that consist of many short free-text labels paired with dollar amounts. Short text labels include information like descriptions, addresses, and alphanumeric type codes; non-standard labels are difficult to structure and interpret. Unstructured narrative text, ranging from a sentence to entire paragraph-based documents, is provided by taxpayers to explain, justify, or document a position. This is a research project. The goal of this project is to identify opportunities to derive value from the unstructured text: assess potential value to examiners from consolidation across labeled pairs, utilize consolidated line items for outlier analysis, etc. | Line-item consolidation is a crucial step in providing value to examiners through the grouping of numerous free-text responses into common, recognizable groups that allow for further analysis and exploration. Not only do these line-items become more accessible to examiners, but it also facilitates a more robust benchmarking analysis of line-item amounts by creating larger groupings of common line-items. Success in the line-item consolidation stage was defined by how effectively each stage of the text consolidation pipeline could reduce the number of unique line-item descriptions while maintaining an appropriate level of specificity. | A Form 1120 Other Deduction line-item description is assigned or consolidated to a consolidation group, which is either a subject matter expert defined deduction category or a deduction description existing in the dataset, based on how semantically similar the descriptions are, measured by cosine similarity. The model produces an additional column appended to the original dataset that contains the consolidation group (text string) that the line-item description (row) was matched to. | In-house | Yes | A Form 1120 Other Deduction line-item description is assigned or consolidated to a consolidation group, which is either a subject matter expert defined deduction category or a deduction description existing in the dataset, based on how semantically similar the descriptions are, measured by cosine similarity. The model produces an additional column appended to the original dataset that contains the consolidation group (text string) that the line-item description (row) was matched to. | The team evaluated the appropriateness of assignments in the top 500 most frequent and material groups in order to further quantify the accuracy of consolidation. Original descriptions were manually reviewed within their respective assigned consolidated groups and a judgment was made on whether the description fit within the rest of the group or not. We also evaluated the amount of consolidation that our updated algorithm could perform. On a test of all 1120 Other Deductions in CDW, the algorithm was able to consolidate nearly 4 million unique line-item descriptions into 253 groups. | No | None of the above | Yes | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-40 | Taxpayer 90 | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-42 | Ticket Management Generative AI Pilot | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | As part of a Chief Technology Officer (CTO)-sponsored objective to enable and utilize Generative Artificial Intelligence (Gen AI) capabilities, the User and Network Services Information Technology (IT) Service Desk, in cooperation with Enterprise Operations, will implement a limited production challenge to support AI-generated Incident Summarization (providing agents with a concise summary of case notes and history), Resolution Notes Generation (generating accurate resolution notes based on actions taken and the solution achieved), and Knowledge Article Generation (generating complete knowledge base articles based on incident or case records). | We expect to see significant time savings when conducting warm handoffs between teams, while reducing the mean time to restore incidents. Additionally, the effort should ensure high-quality resolution notes are generated to assist in solving future incidents, while feeding existing knowledge review and publication processes, potentially empowering self-service capabilities that will reduce time-to-closure activities. | Incident Summarization (providing agents with a concise summary of case notes and history), Resolution Notes Generation (generating accurate resolution notes based on actions taken and the solution achieved), and Knowledge Article Generation (generating complete knowledge base articles based on incident or case records) | Vendor | Yes | Incident Summarization (providing agents with a concise summary of case notes and history), Resolution Notes Generation (generating accurate resolution notes based on actions taken and the solution achieved), and Knowledge Article Generation (generating complete knowledge base articles based on incident or case records) | IRS Service Central Employee portal search results and summaries, Virtual Agent interactions/Multiturn Catalog Integration, and ITSM input driving Incident Summarization, Resolution Note Generation, and Knowledge Article Generation. | Yes | None of the above | No | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-46 | GenAI for Low Code Platform | Pre-Deployment - The use case is in a development or acquisition status. | Human Resources | Pre-deployment | High-impact | High-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Leveraging an AI product in the HCO-WPA Environment can significantly enhance the efficiency and accuracy of application development while improving user outcomes. By automating repetitive tasks and providing intelligent suggestions, the product simplifies app building, reducing development time and minimizing errors. For scenarios like delivering pay period updates, successors, and workforce demand, the product ensures timely and accurate dissemination of information by generating automated workflows and insights. This not only limits human error but also accelerates the process, enabling users to receive updates quickly and consistently. | The integration of AI into Workforce Planning and Succession Planning will provide significant benefits by enabling more accurate forecasting, data-driven insights, and proactive decision-making. AI can identify workforce trends, skill gaps, and attrition risks earlier, allowing leaders to align resources with future mission needs. By automating complex analysis and generating actionable recommendations, AI supports more efficient planning, ensures a stronger pipeline of ready successors, and enhances organizational resilience. The positive outcome is a more agile, informed, and future-ready workforce that can adapt quickly to evolving priorities and sustain mission success. | The AI system outputs are designed to provide intelligent, context-aware results tailored to user needs. These outputs can include recommendations, insights, predictions, forecasting, or automated actions based on data analysis, natural language understanding, and machine learning models. The system ensures accuracy, relevance, and clarity, enabling users to make informed decisions, streamline workflows, and achieve desired outcomes efficiently. | The AI system outputs are designed to provide intelligent, context-aware results tailored to user needs. These outputs can include recommendations, insights, predictions, forecasting, or automated actions based on data analysis, natural language understanding, and machine learning models. The system ensures accuracy, relevance, and clarity, enabling users to make informed decisions, streamline workflows, and achieve desired outcomes efficiently. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-48 | Statistical Information Services Customer Correspondence Analytics | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Statistics of Income (SOI) has collected years' worth of customer correspondence. While having this information has been useful for historical documentation purposes, it is apparent that moving forward we need to evaluate the trends in requests, identify areas of demand, topics of interest, and areas for internal improvement. Through the use of AI, we intend to summarize and evaluate the previous several years' worth of correspondence to better direct our limited resources moving forward. The output will help us better realign our resources to more highly impactful tasks and assignments. It will also help direct where improvements to current products need to be made and what those improvements ought to be to better inform the public. | Improve customer service by having SOI information more easily understandable and available. | The output of the AI system will be summary evaluations, via [name removed] capabilities, of the past several years of customer correspondence. | The output of the AI system will be summary evaluations, via [name removed] capabilities, of the past several years of customer correspondence. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-54 | 1040X AI Transcription | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Computer Vision | This project is meant to improve IRS capabilities to extract information from images and PDFs. | In the 1040X case, this tool will help clear a backlog of scanned 1040X returns. Generally, the tools will improve the efficiency of various stakeholders within the IRS so that taxpayers receive better service. | The system takes in images, classifies them based on what IRS form they are, and then extracts requested information which is then stored in a table. | The system takes in images, classifies them based on what IRS form they are, and then extracts requested information which is then stored in a table. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-56 | Accounts Management “Balance Due” Call Transcript Classification | Pilot - The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | This project aims to sort customer call transcripts into groups based on the issues customers need help with. | The AI system will inform research and decision making to enhance the customer experience. | The AI system will output categories, classifying call transcripts. | 09/09/2024 | In-house | No | The AI system will output categories, classifying call transcripts. | Two datasets are used to fine-tune the model. The first dataset is the “NXT Switchboard Annotations”. This dataset has 260 hours of speech where researchers manually annotated calls for syntactic structure and disfluencies. Fine tuning is used to clean the call transcriptions. For the second dataset, project members manually labeled a dataset to train a classifier to identify and remove the identity verification portion of the call. | Yes | None of the above | Yes | https://github.com/MaartenGr/BERTopic, https://github.com/pytorch/pytorch, https://radimrehurek.com/gensim/index.html | ||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-59 | Identity Verification | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Classical/Predictive Machine Learning | Improves the identity document verification process for IRS users who complete the registration process. The expected benefit of this AI use case is to detect submissions of forged identity documents. | Improvements to the identity verification process to enable taxpayers to access IRS online services. | Confidence threshold based on output from multiple machine learning models that result in a binary fraud indication, True/False; indication of fraud is confirmed with human review. | Vendor | Yes | Confidence threshold based on output from multiple machine learning models that result in a binary fraud indication, True/False; indication of fraud is confirmed with human review. | Proprietary to the vendor | No | None of the above | No | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-6 | Robotic Process Automation for Form 941-X Extraction | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-61 | Procurement Data Transparency, Reporting, & Decision Tracking | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | Platform provides an analytics hub for contract documents and contract spending metadata. Generative AI is used to produce a summary of the products and/or services acquired under each agency contract. | Improve contracting spending category management and better ability to identify and analyze contracted products and services purchased by the agency. | System outputs spreadsheet or table style information with AI generated results. AI generated output is provided to contracting officials for review. | 04/11/2025 | Vendor | Yes | System outputs spreadsheet or table style information with AI generated results. AI generated output is provided to contracting officials for review. | Contract document PDFs are summarized using an AI model. An AI generated summary of products or services purchased under a contract is saved into a tabular, spreadsheet table format. | No | None of the above | Yes | |||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-62 | Legislative Analysis Tool | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The Legislative Analysis tool is a comprehensive tool for tracking, analyzing, and managing legislative and regulatory changes. It provides the following benefits: Real-time updates on legislative developments at federal, state, & local levels; Robust impact analysis features to assess how legislative changes affect your organization; A comprehensive database of legislative documents, making it easy to search & access relevant information; A user-friendly interface & collaboration features enhance team productivity & compliance management | Improve efficiency | The outputs include the following: summaries of the chosen section of the bill; the # of impacted IT Systems and the estimated level of effort, per section; the forms outlined in each section, and their respective reasoning for impact. | In-house | No | The outputs include the following: summaries of the chosen section of the bill; the # of impacted IT Systems and the estimated level of effort, per section; the forms outlined in each section, and their respective reasoning for impact. | No agency data is used to train the model | No | None of the above | Yes | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-63 | F1023 Hybrid NLP and ML Pipeline | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | The model attempts to predict non-compliance among tax-exempt entities by leveraging embedding-based feature extraction feeding a downstream classifier. | F1023 and F990 contain natural text fields around activity descriptions and mission or purpose statements, making it difficult for Tax Exempt Government Entities (TEGE) to detect non-compliance among the population of Exempt Organizations (EOs). This use case provides an improved ability to predict non-compliance. | This solution will address multiple F1023 strategies in TEGE and the output will be tailored to improve operational efficiency for each strategy. | This solution will address multiple F1023 strategies in TEGE and the output will be tailored to improve operational efficiency for each strategy. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-64 | Integrate Word Processor AI for Digitalization | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Classical/Predictive Machine Learning | The intended purpose of the Digitalization Enablement Platform (DEP) and the integration of AI into processing taxpayer-submitted documents is to minimize the transcription and storage of paper documents, streamline case management, and enhance the accuracy and efficiency with which taxpayer information is extracted, organized, and made accessible. The expected benefits include a massive reduction in physical storage requirements, cost savings, improved data security, a reduction in backlogged cases, and improved service. | Use of this system automates what formerly required IRS staff to transcribe data from taxpayer-submitted forms. This OCR automation allows for faster processing of taxpayer submissions and for efficiencies of IRS staff time. | The AI will extract taxpayer information from IRS forms agnostic of any demographic bias, in accordance with federal AI use regulations and standards. | 08/12/2025 | In-house & Contractor | Yes | The AI will extract taxpayer information from IRS forms agnostic of any demographic bias, in accordance with federal AI use regulations and standards. | The project team created a Synthetic Data Generator (SDG) to generate sample completed IRS forms in order to train the AI models. Approximately 50 to 100 synthetically generated forms are used to both train and test the AI models. | No | None of the above | Yes | |||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-65 | AI Voiceover Generation for eLearning Development | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Voiceover generation for training courses. With return-to-office mandates in place we are unable to record narration voiceovers for training courses as we have historically done using live employees and microphone recording kits. | Increased quality, a more diverse selection of voices, much more rapid voiceover generation and revision, increased development efficiency. | Narration scripts are entered into the web-based interface, which outputs .wav audio files for import to our eLearning authoring applications. Scripts are free from Personally Identifiable Information/Federal Taxpayer Information (PII/FTI) and adhere to standard training development protocol (fictitious names and addresses, etc.). | Vendor | No | Narration scripts are entered into the web-based interface, which outputs .wav audio files for import to our eLearning authoring applications. Scripts are free from Personally Identifiable Information/Federal Taxpayer Information (PII/FTI) and adhere to standard training development protocol (fictitious names and addresses, etc.). | Narration scripts used to generate voiceover audio. | No | None of the above | No | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-68 | Enterprise Systems Testing Generative Pre-Trained Transformer (EST Knowledge Source and Documentation AI Tool) | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The EST GPT project is intended to solve the problem of inefficient and fragmented access to organizational knowledge by creating a centralized, AI-powered repository for documentation, manuals, and legacy data. Currently, employees face delays and inconsistencies when retrieving critical information, leading to wasted time, higher costs, and potential knowledge gaps, particularly during workforce transitions. By leveraging AI, EST GPT will enable quick, accurate, and user-friendly lookup of information, reducing wait times and improving access across the organization. This approach also mitigates the risk of knowledge loss by preserving institutional expertise in a structured, searchable format. As a result, the tool will not only enhance efficiency and decision-making but also deliver cost savings by minimizing duplication of effort and reducing reliance on individual knowledge holders. Ultimately, EST GPT addresses the challenge of knowledge management by ensuring information is always accessible, reliable, and ready to support data-driven operations. | The expected benefits of EST GPT for the agency’s mission and the general public include significantly improved efficiency, cost savings, and enhanced access to critical knowledge. By streamlining navigation across the wide array of testing tools, the AI tool reduces reliance on Subject Matter Experts (SMEs), empowering teams to independently discover and apply insights, which in turn minimizes user wait times and bottlenecks. 
EST GPT accelerates onboarding by shortening the learning curve, ensuring new staff quickly become productive, and preserving institutional knowledge to prevent disruption during workforce transitions. Its ability to rapidly retrieve information from documentation ensures timely support, reduces delays in decision-making, and sustains project momentum. These improvements not only drive operational efficiency and reduce costs associated with duplicated effort and prolonged training but also strengthen the agency’s ability to deliver reliable, data-driven outcomes. Ultimately, EST GPT supports mission success by safeguarding knowledge and enabling faster, more informed public service delivery. | The outputs of EST GPT (EST Knowledge Source and Documentation AI Tool) are AI-powered responses and insights that make organizational knowledge more accessible and actionable. It delivers rapid information retrieval from documentation and manuals and provides clear explanations that reduce reliance on Subject Matter Experts. The tool generates outputs that accelerate user onboarding by enhancing understanding of tool functionalities, while also preserving institutional knowledge by maintaining access to historical documentation and legacy data. These outputs ensure timely support, improved efficiency, reduced user wait times, and sustained project momentum across EST’s testing processes. | The outputs of EST GPT (EST Knowledge Source and Documentation AI Tool) are AI-powered responses and insights that make organizational knowledge more accessible and actionable. It delivers rapid information retrieval from documentation and manuals and provides clear explanations that reduce reliance on Subject Matter Experts. The tool generates outputs that accelerate user onboarding by enhancing understanding of tool functionalities, while also preserving institutional knowledge by maintaining access to historical documentation and legacy data. 
These outputs ensure timely support, improved efficiency, reduced user wait times, and sustained project momentum across EST’s testing processes. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-69 | Using Text Summarization for Open-Ended Survey Responses | Pre-Deployment - The use case is in a development or acquisition status. | Science | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Analyzing open-ended survey responses is very time intensive. This project seeks to reduce the time that analysis takes. | Open-ended survey responses can capture more nuanced information than closed response questions. Using AI to summarize these responses will enable Small Business/Self-Employed (SB/SE) Research to better leverage open-ended responses to provide our customers with more nuanced information. | The AI will output a summary of all the open-ended responses. This summary will capture all the main themes and important details of the open-ended responses. | The AI will output a summary of all the open-ended responses. This summary will capture all the main themes and important details of the open-ended responses. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-7 | A6020(b) Combined Annual Wage Reporting Automation | Retired | Retired | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | ||||||||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-72 | TP360 IRM Research Companion | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The AI is intended to assist Customer Service Representatives (CSRs) as a Knowledge Management Librarian for natural language querying to find solutions, improving IRM research and thus minimizing time spent and redundant research. Currently, CSRs do a basic search on SERP https://serp.enterprise.irs.gov/homepage.html and have to browse through the results to find tax-related information in the IRM while the taxpayer is on the phone. | AI chatbots can deliver instant responses to routine inquiries, significantly reducing wait times and improving user satisfaction. Chatbots enhanced with Retrieval-Augmented Generation (RAG) can pull responses from live documents or knowledge bases, improving grounding and accuracy. | Provides a concise and precise answer to the query and provides IRM references for further investigation if needed | Provides a concise and precise answer to the query and provides IRM references for further investigation if needed | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-75 | Procurement Research | Pre-Deployment - The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | Presumed high-impact but determined not high-impact | Not high-impact | The research effort may examine AI bias (e.g., large vs. small business) in drafting procurement documents for market research / proposal evaluation. This is a research effort and not an operational use case where AI will make or influence government decisions. | Generative AI | Generative AI is new to the acquisition workforce. User research is needed to determine how to make AI-powered procurement systems easy to use, highly functional, and accurate for agency employees performing procurement tasks. Further, there is little research on which AI models are most performant for federal procurement tasks. An objective evaluation is needed to compare commercial AI models and possibly fine-tune or otherwise improve AI models for the federal procurement domain. | America’s AI Action Plan, dated July 2025, calls for an AI procurement toolbox, managed by the General Services Administration (GSA) in coordination with OMB, that facilitates uniformity across the Federal enterprise to the greatest extent practicable. This system would allow any Federal agency to easily choose among multiple models. Agencies should also have ample flexibility to customize models to their own ends, as well as to see a catalog of other agency AI uses. Treasury is well-positioned to make a significant contribution and advance the state of the art in AI models used for procurement tasks. | Generative AI outputs narrative for procurement documents. | Generative AI outputs narrative for procurement documents. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-76 | AI-Augmented Software Modernization and Insight | Pilot - The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The IRS maintains a large portfolio of complex Java/Spring Boot applications that accumulate technical debt, dependency conflicts, and inconsistent coding practices. Manually identifying outdated libraries, researching upgrade paths, and resolving build failures consumes significant engineering time and slows modernization efforts. AI is intended to reduce this burden by automating research, detecting vulnerabilities, and proposing secure, standards-compliant code improvements. | For the IRS mission: Faster modernization of taxpayer-facing systems, improved system reliability, and reduced risk from outdated or insecure software components. For the public: More resilient digital services, reduced downtime, and faster delivery of secure taxpayer tools and features. For IRS engineers: Less time spent on repetitive research, freeing focus for high-value mission logic and innovation. | Prioritized lists of technical debt items (e.g., outdated dependencies, deprecated APIs, security vulnerabilities). Suggested code refactorings and dependency upgrades. Generated draft documentation, test cases, and migration strategies. Analytical reports highlighting cross-repository trends in build failures and performance issues. | In-house | Yes | Prioritized lists of technical debt items (e.g., outdated dependencies, deprecated APIs, security vulnerabilities). Suggested code refactorings and dependency upgrades. Generated draft documentation, test cases, and migration strategies. Analytical reports highlighting cross-repository trends in build failures and performance issues. 
| Training: The model was trained on publicly available, licensed, or permissively shared text, code, and documentation, supplemented by human-authored examples for language, reasoning, and coding tasks. It was not trained, fine-tuned, or evaluated on IRS or other proprietary government systems, taxpayer data, or TaxPro code. All datasets were filtered to remove PII, secrets, and sensitive financial data in accordance with privacy and security policies. Fine-Tuning and Alignment: The model underwent supervised instruction tuning using de-identified and synthetic examples focused on software engineering, documentation, and compliance use cases. Reinforcement learning from human feedback improved quality, clarity, and policy adherence. Domain calibration relies on open-source Java/Spring projects, Maven build patterns, and general DevSecOps documentation, with no IRS-specific content. Evaluation and Governance: Performance was assessed using quantitative benchmarks for reasoning, code generation, and factual accuracy, including datasets such as HumanEval and MMLU, along with qualitative reviews for ethical, security, and accuracy standards. Periodic bias, privacy, and red-team testing support AI governance. All datasets undergo licensing and PII review. Operational prompts, logs, and outputs are not used to retrain the base model. Model documentation and provenance support auditability and align with federal AI risk-management frameworks, including NIST AI RMF 1.0. | No | None of the above | Yes | ||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-79 | Resume Ranking Engine | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Addresses the career alignment problem by bridging the gap between employees and job opportunities. Employees struggle to understand how well their resume aligns with specific job roles. The AI will help analyze resumes and job descriptions automatically, rank roles by fit score so employees know where to focus, highlight gaps in skills or experience with actionable suggestions, and provide section-by-section scoring so the employee knows exactly what needs improvement. | Increased efficiency, higher accuracy, transparency, equity, and career growth. | Ranked list of roles, section-level scores, skill gap list, actionable suggestions, JavaScript Object Notation (JSON) payload, and exportable Portable Document Format (PDF) report | Ranked list of roles, section-level scores, skill gap list, actionable suggestions, JavaScript Object Notation (JSON) payload, and exportable Portable Document Format (PDF) report | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-84 | AI-Powered Enterprise Standard Portfolio (ESP) for End-to-End Technology Lifecycle | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The core problem the AI is intended to solve lies in the fragmented and highly manual nature of how commercial off-the-shelf technologies are managed throughout their life cycle. Today, users submit requests for new software or hardware, but determining whether the request duplicates existing tools, whether the technology is still supported, or whether it fits enterprise standards often depends on tribal knowledge, scattered documentation, or siloed teams. This leads to long cycle times, inconsistent risk evaluations, and limited visibility into how many licenses or subscriptions are actually in use. Organizations struggle to manage active and inactive instances across both on-premise and cloud-based environments, making it difficult to optimize costs or ensure that end-of-life and upgrade paths are handled consistently. The lack of seamless hand-offs across cataloging, approval, procurement, deployment, and monitoring workflows contributes further to inefficiency and risk. | The expected benefits of introducing AI into this process are significant. By providing automated triage and context-aware answers to common questions about request procedures and timelines, AI can shorten decision cycles and improve user satisfaction. Automated checks against enterprise standards and authoritative catalogs ensure duplication and compliance issues are caught early, while lifecycle and vulnerability monitoring reduces the chance of security exposures. 
Cost efficiency improves as AI correlates license usage data with actual activity, identifying inactive instances and recommending right-sizing actions. Standardized analyses and automatically generated approval packets improve auditability, while consistent tracking of end-of-life timelines and upgrade requirements reduces the likelihood of unexpected outages. Ultimately, the mission outcome is better: users gain timely access to the technologies they need, while the organization maintains governance, security, and fiscal responsibility. | The outputs of the AI system take several forms. On the conversational side, it delivers plain-language guidance about processes, expected timelines, and relevant policies. On the analytical side, it generates recommendations about whether a request is a new product, an upgrade, or already covered by an existing enterprise standard. It produces risk and readiness scores that summarize vulnerabilities, lifecycle status, duplication concerns, and cost implications. The system also generates structured artifacts such as intake forms, change analysis documents, evaluation plans, and procurement justifications that can flow directly into existing approval or tracking systems. In operations, it outputs monitoring dashboards and alerts that highlight unused licenses, upcoming end-of-life dates, or pending upgrades. Each of these outputs is designed to be human-reviewable, with citations and evidence provided to support trust and transparency. | The outputs of the AI system take several forms. On the conversational side, it delivers plain-language guidance about processes, expected timelines, and relevant policies. On the analytical side, it generates recommendations about whether a request is a new product, an upgrade, or already covered by an existing enterprise standard. It produces risk and readiness scores that summarize vulnerabilities, lifecycle status, duplication concerns, and cost implications. 
The system also generates structured artifacts such as intake forms, change analysis documents, evaluation plans, and procurement justifications that can flow directly into existing approval or tracking systems. In operations, it outputs monitoring dashboards and alerts that highlight unused licenses, upcoming end-of-life dates, or pending upgrades. Each of these outputs is designed to be human-reviewable, with citations and evidence provided to support trust and transparency. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-86 | Form 4546 IDR Question-Answering Automation | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | Use of Large Language Models (LLMs) to automatically assess whether taxpayers have addressed the questions raised by Large Business and International (LB&I) Revenue Agents in Information Document Request (IDR) responses. | This will assist the Revenue Agent by processing the received taxpayer correspondence to speed up disposition. | A document that details whether the IDR has been answered; if it has been answered, a summary is generated. | A document that details whether the IDR has been answered; if it has been answered, a summary is generated. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-9 | Vendor Risk Analytics | Pilot - The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | Presumed high-impact but determined not high-impact | Not high-impact | AI makes recommendations only and any contracting decisions would go through multiple layers of review by agency officials. | Classical/Predictive Machine Learning | Research on vendor risk analytic methods using supervised learning to assess contractor responsibility and whether a prospective vendor would perform successfully. AI can predict potential vendor inability to perform successfully. This capability has the potential to prevent costly, mission-impacting contracting issues. Seminal research and model training in this area was conducted by IRS and Navy personnel who published this paper (https://calhoun.nps.edu/handle/10945/62901). Subsequent, independent federally funded and academic research studies have also found that AI has the potential to improve contractor source selection decision making. | AI assesses the likelihood of successful contractor performance and identifies vendors at heightened risk of poor performance or non-compliance. | Vendor risk assessments output in spreadsheet or dashboard data visualization formats. | In-house | No | Vendor risk assessments output in spreadsheet or dashboard data visualization formats. | Contractor entity and contract transaction data from SAM.gov and USASpending.gov was used to train the models. Contractor entity data is provided by vendors during the contractor registration process. Contract transaction data is reported by federal agency Contracting Officers. | https://catalog.data.gov/dataset/usaspending-gov-federal-award-subaward-and-account-data | No | None of the above | Yes | |||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-94 | AI for Schema Development - Online Adaptive Forms | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | The Generative AI Tool will assist our project team in developing schemas on all non-tax, tax-related or tax forms they are converting to mobile/online adaptive versions. The use of these generative AI tools will reduce time spent on this task, freeing up our authors to focus on the form authoring process. | The Online Adaptive Forms team is facing significant resource constraints due to budget cuts and limited authoring staff. To meet the IRS’s need to onboard the ~100 non-tax forms in the backlog for online adaptive forms, the project team is leveraging Generative AI to streamline and accelerate the form digitalization process. The opportunities for this use case include removing the need for employees to manually create schemas for our forms, reducing the number of hours spent manually creating schemas and authoring forms, and assisting form authors by generating a first draft version of the mobile adaptive form. | This Generative AI tool-powered approach has enabled the Online Adaptive Forms team to reduce the time to deliver initial form drafts by 8 weeks, putting the team in a strong position to meet the IRS’s needs and the 21st Century IDEA Act. This innovative use of technology not only solves a business challenge, but also lays the foundation for a more automated, scalable forms conversion process moving forward. We have tested two AI products to provide outputs of the Extensible Markup Language (XML) or JavaScript Object Notation (JSON) schemas that have been designed to work with the adaptive forms process. 
| This Generative AI tool-powered approach has enabled the Online Adaptive Forms team to reduce the time to deliver initial form drafts by 8 weeks, putting the team in a strong position to meet the IRS’s needs and the 21st Century IDEA Act. This innovative use of technology not only solves a business challenge, but also lays the foundation for a more automated, scalable forms conversion process moving forward. We have tested two AI products to provide outputs of the Extensible Markup Language (XML) or JavaScript Object Notation (JSON) schemas that have been designed to work with the adaptive forms process. | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-95 | Using AI to analyze Taxpayer Burden Survey open-ended responses | Pre-Deployment - The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Natural Language Processing (NLP) | Large amounts of qualitative data need to be coded to convert to quantitative data for analysis. | Reduced level of effort and objective summarization of qualitative data; Complete data from surveys | Natural language summaries of open-ended survey response | Natural language summaries of open-ended survey response | ||||||||||||||||||||
| Department Of The Treasury | Internal Revenue Service (IRS) | TREAS-IRS-96 | AI Assisted Business Process Mapping | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | The output is not presumed to be high-impact and is not used as the principal basis for significant decisions/actions | Generative AI | This project utilizes a Large Language Model (LLM) to produce a library of IRS business process maps and optionally integrate them into knowledge graphs based on the Internal Revenue Manual (IRM), the IRS processing codes (Document 6209), and other pertinent documents. In addition, a custom tool will assist users with prompt engineering to drill down on a specific business area with additional desktop guides and standard operating procedures (SOPs). | This is for the IRS's internal use to drive efficiency by mapping processes and helping identify bottlenecks along the way. The library of business process maps will be a good companion to the IRM and Document 6209 and a valuable resource for IRS employees. | Business process maps in standard BPMN (Business Process Model and Notation) or knowledge graph. | Business process maps in standard BPMN (Business Process Model and Notation) or knowledge graph. | ||||||||||||||||||||
| Department Of The Treasury | Technology Common Services Center (TCSC) | TREASURY-OCIO-001 | GenAI Productivity Tool | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Not high-impact | Not high-impact | The AI system's outcome does not affect civil rights, safety, or access to critical services; therefore, it doesn't meet the criteria for a high-impact AI use case. This system is provided to bureaus and offices across Treasury that may choose to implement high-impact capabilities; however, as the system provider we are unable to indicate this classification. | Generative AI | General research and smart assistant functions | Improves organizational efficiency by automating communication, documentation, and research tasks. | Generates contextually relevant, coherent, and human-like text responses that address user queries, generate content, and support decision making across a wide range of business and operational needs. It provides responses to general research questions; provides citations and sources; drafts and reviews text. | 05/07/2025 | Purchased from a vendor | No | Generates contextually relevant, coherent, and human-like text responses that address user queries, generate content, and support decision making across a wide range of business and operational needs. It provides responses to general research questions; provides citations and sources; drafts and reviews text. | The model was trained and evaluated on a mixture of licensed data, data created by human trainers, and publicly available data. For Treasury enterprise use, no agency-specific or Treasury data was used to train, fine-tune, or evaluate the model unless such data was explicitly provided and securely isolated within that enterprise environment. 
| Yes | None of the Above | No | Yes | Foreseeable impacts include privacy risks such as potential data exposure or misuse, civil rights concerns stemming from algorithmic bias or unequal access, and civil liberties implications related to automated decision-making or service disruption. These risks were identified through internal AI risk reviews, compliance assessments against NIST and OMB guidance, and controlled scenario-based simulations conducted during pilot testing. | In-progress | Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | Yes | Establishment of an appropriate appeal process is in-progress | Direct usability testing | |||||
| Department Of The Treasury | Technology Common Services Center (TCSC) | TREASURY-OCIO-003 | GenAI Procurement Tool | Pre-Deployment - The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | Not high-impact | Not high-impact | This AI use case is not considered high-impact for a U.S. government agency. The AI system’s functionality supports acquisition planning, bid analysis, and workflow efficiency improvements. Its outcomes do not directly affect civil rights, individual safety, or access to critical government services. The system operates as a decision-support tool that aids contracting officers and analysts in administrative and analytical tasks, rather than making autonomous determinations impacting individuals or public welfare. Therefore, it does not meet the criteria for a high-impact AI use case. | Classical/Predictive Machine Learning | Streamlines government procurement and contract management by automating compliance checks, bid analysis, and acquisition workflows. It directly supports federal acquisition compliance by validating documentation against procurement rules (such as FAR and DFARS), ensuring audit-readiness, and maintaining transparent, traceable records of contract decisions—minimizing risks of non-compliance in acquisition processes. | Improves Procurement Efficiency & Accountability — supports Treasury’s goal of ensuring responsible use of federal funds and maintaining integrity in acquisition and financial operations. Simplifies procurement and contract management for Treasury acquisition teams by automating bid review, data analysis, and documentation. It helps staff make faster, data-driven decisions while increasing transparency and reducing administrative burden. | Produces structured procurement intelligence outputs such as bid evaluation matrices, acquisition performance dashboards, contract summaries, and compliance validation reports. It automatically flags discrepancies in bids, visualizes procurement timelines, and recommends optimal vendor selections based on cost, compliance, and performance history—enabling more transparent and defensible acquisition decisions. | Purchased from a vendor | No | Produces structured procurement intelligence outputs such as bid evaluation matrices, acquisition performance dashboards, contract summaries, and compliance validation reports. It automatically flags discrepancies in bids, visualizes procurement timelines, and recommends optimal vendor selections based on cost, compliance, and performance history—enabling more transparent and defensible acquisition decisions. | The model was trained and evaluated on a combination of licensed datasets, data created by human subject matter experts, and publicly available acquisition and procurement data. For Treasury enterprise use, the model was fine-tuned and validated on labeled internal acquisition documents, solicitation records, and agency correspondence that are classified as internal data. | No | None of the Above | Yes | In-progress | Foreseeable impacts include privacy risks such as potential data misuse, fairness concerns like algorithmic bias in procurement evaluations, and operational issues such as reduced human oversight in decision-making. These impacts were identified through internal security and ethical risk assessments, best practice reviews aligned with federal AI guidance, and detailed analysis of vendor documentation to verify compliance with responsible AI standards and safeguard against unintended consequences. | In-progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | In-progress | Establishment of an appropriate appeal process is in-progress | Direct usability testing | ||||||
| Department Of The Treasury | Office of Financial Research (OFR) | TREASURY-OFR-001 | ChatOFR AI Chatbot | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Not high-impact | Not high-impact | Internal tool for OFR staff only; air-gapped system with no external impact on public services or rights | Generative AI | In-house AI platform providing OFR staff with access to multiple large language models for question-answering guidance, text generation and summary, and code development assistance | Improves operational efficiency by providing staff with quick access to internal guidance and draft text generation capabilities, reducing time spent searching for information and creating initial document drafts | Responses to queries using multiple LLM models provided by vendor. Includes access to specialized knowledge bases. Provides IT and Research staff with code generation, debugging, and optimization outputs. Provides user interface to create API tokens for programmatic usage. | Developed in-house | Yes | Responses to queries using multiple LLM models provided by vendor. Includes access to specialized knowledge bases. Provides IT and Research staff with code generation, debugging, and optimization outputs. Provides user interface to create API tokens for programmatic usage. | Internal OFR governance documents from SecOps, PDF documentation, and the OFR Document Library. Uses RAG on organization-specific information within the OFR network. | No | Yes | |||||||||||||||
| Department Of The Treasury | Office of Financial Research (OFR) | TREASURY-OFR-002 | Programmatic Access to LLMs via API | Deployed - The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | Not high-impact | Not high-impact | Secure endpoints for internal researchers only; no direct public impact or rights-affecting decisions | Classical/Predictive Machine Learning | Secure endpoints enabling researchers to submit prompts and receive model output directly within analytical workflows | Streamlines research processes by integrating AI capabilities directly into analytical workflows, reducing manual data processing and improving research efficiency | Model outputs delivered directly to researchers' analytical workflows in response to programmatic queries | 09/11/2025 | Developed in-house | Yes | Model outputs delivered directly to researchers' analytical workflows in response to programmatic queries | Research datasets and analytical data used within OFR's research workflows | No | Yes | ||||||||||||||
| Department Of The Treasury | Office of Financial Research (OFR) | TREASURY-OFR-003 | NCCBR Data Validation Pilot | Pilot - The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | Not high-impact | Not high-impact | Data quality validation tool that flags errors for human review; no automated decisions affecting public services or individual rights | Classical/Predictive Machine Learning | AI modules applied to our collection pipelines to flag ingestion errors and assess data quality before analyst review | Improves data quality and reduces manual effort in identifying data ingestion errors, enabling analysts to focus on higher-value tasks | Flags indicating potential data ingestion errors and data quality assessments for analyst review | 06/01/2025 | Developed in-house | Yes | Flags indicating potential data ingestion errors and data quality assessments for analyst review | NCCBR collection pipeline data; financial data ingested from various sources | No | Yes | ||||||||||||||
| Department Of The Treasury | Office of Financial Research (OFR) | TREASURY-OFR-005 | Automated Document Processing | Pre-Deployment - The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Document conversion tool for internal research use; no public-facing decisions or rights-impacting outcomes | Computer Vision | Converting legacy PDFs and chart images into structured datasets for faster research ingestion | Accelerates research by automating the conversion of unstructured documents into analyzable data formats, reducing manual data entry and improving data accessibility | Structured datasets extracted from PDFs and images, including tables, charts, and text data in machine-readable formats | Structured datasets extracted from PDFs and images, including tables, charts, and text data in machine-readable formats | ||||||||||||||||||||
| Department Of The Treasury | Office of Financial Research (OFR) | TREASURY-OFR-006 | Advanced Anomaly & Changepoint Detection | Pre-Deployment - The use case is in a development or acquisition status. | Science | Pre-deployment | Not high-impact | Not high-impact | Analytical tool for market monitoring that provides signals for human analysts to review; no automated decision-making affecting individuals | Classical/Predictive Machine Learning | Scaling our topic-attention analytics to signal market shifts in real time | Enhances market surveillance capabilities by providing real-time detection of significant market changes, enabling more timely analysis of and response to financial market developments | Real-time alerts and visualizations of detected anomalies and changepoints in market attention patterns, with confidence scores and contextual information | Real-time alerts and visualizations of detected anomalies and changepoints in market attention patterns, with confidence scores and contextual information | ||||||||||||||||||||
| Department Of The Treasury | Office of Financial Research (OFR) | TREASURY-OFR-008 | Procurement Intelligence | Pre-Deployment - The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | Not high-impact | Not high-impact | Internal procurement analysis tool for market monitoring and supplier identification; no direct impact on external parties | Classical/Predictive Machine Learning | AI-powered market monitoring to identify competitive suppliers, single-source risks, and cost-optimization opportunities | Improves procurement efficiency and reduces costs by identifying optimal suppliers, mitigating single-source risks, and highlighting cost-saving opportunities | Reports identifying competitive suppliers, risk assessments for single-source dependencies, and recommendations for cost optimization | Reports identifying competitive suppliers, risk assessments for single-source dependencies, and recommendations for cost optimization | ||||||||||||||||||||
| Department Of The Treasury | Office of Financial Research (OFR) | TREASURY-OFR-009 | Operational Support Bots | Pre-Deployment - The use case is in a development or acquisition status. | Information Technology | Pre-deployment | Not high-impact | Not high-impact | Internal IT support tool for OFR staff; no external user impact or rights-affecting decisions | Generative AI | Chatbots for IT service-desk tasks (ticket triage, password resets, and resource provisioning) to free analysts for higher-value work | Reduces IT support workload by automating routine service desk tasks, improving response times for common requests, and allowing IT staff to focus on complex issues | Automated responses to IT support queries, ticket categorization and routing, password reset confirmations, and resource provisioning status updates | Automated responses to IT support queries, ticket categorization and routing, password reset confirmations, and resource provisioning status updates | ||||||||||||||||||||
| Department Of Transportation | CAIO, NETT | DOT-1000001 | USDOT Compliance Plan for OMB Memorandum M-24-10 (September 2024) | Deployed | Administrative Functions | Deployed | Not high-impact | Not high-impact | Other | This Compliance Plan conveys DOT's approach to achieving consistency with OMB Memorandum M-24-10 Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The plan aligns with M-24-10's three main pillars of Strength | Provides the Department with a roadmap for accelerating the responsible use of AI and complying with OMB directives. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, Volpe, OST-R | DOT-1000002 | Advanced Research and Testing (ART) Network | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Agentic AI | The ART Network is the shared service environment for AI research and development activities that provides researchers with access to a rapid AI innovation, exploration, development, and sharing platform using secure and approved IT infrastructure. | The ART Network shared service accelerates AI-enabled research at the Department by providing researchers with direct access to an established platform that contains all available AI tools for that environment rather than making changes to an existing env | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO | DOT-1000003 | AI Operations Laboratory (OPSLAB) | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Agentic AI | Provides operational AI use case developers with access to shared environments with all OCIO-cleared AI functionality for use case experimentation, development, and initial data and model risk management identification and mitigation. | The OPSLAB shared service accelerates AI-enabled operational use case development at the Department by providing AI developers with direct access to an established platform that contains all available AI tools for that environment rather than making chan | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO | DOT-1000004 | Transportation AI-enabled Network (TrAIN) | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Agentic AI | The TrAIN is the set of shared service environments for operational AI use cases that provides AI developers with pre-established, secure AI-enabled environments for development, testing and deployment under the Chief AI Officer's compliance and risk ma | The TrAIN shared service accelerates AI-enabled operational use case deployment at the Department by providing AI developers with direct access to an established platform that contains all available AI tools for that environment rather than making change | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | OIE, CAIO, OST-R, OCIO | DOT-1000005 | AI Support and Collaboration Center (AISCC) | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | The AISCC provides all DOT employees and contractors with a website for getting educated on AI topics, getting inspired by DOT and external use cases and capabilities, collaborating with internal and external AI communities of practice, and partnering with AI subject | Provides the Department's north star for the delivery of AI education, inspiration, and direction to the workforce. | 10/09/2024 | No | No | Not Applicable | |||||||||||||||||||
| Department Of Transportation | CAIO, HASS, Volpe | DOT-1000006 | Enterprise Personal Productivity Assistant Capability | Deployed | Administrative Functions | Deployed | Not high-impact | Not high-impact | Generative AI | Secure enterprise AI-enabled document generative AI capabilities. | Research and operating administrations are able to use application program interfaces (APIs) to develop employee productivity enhancing solutions for querying user-provided documents. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, HASS, Volpe | DOT-1000007 | Enterprise "Ask Dottie" ChatBot Capability | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Natural Language Processing (NLP) | Secure enterprise AI-enabled question and answer ChatBot capabilities for natural language querying of documents and data. | Provides the Department with secure enterprise-wide LLM ChatBot capabilities that can be tailored to specific use cases. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000008 | Certified Professional Controller (CPC) On-Board Success Evaluator | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Classical/Predictive Machine Learning | CPC Outlook Generator (COG) is a front-end Graphical User Interface built to interact with a data model called CPC Onboarding Success Evaluator (COSE), a machine learning model capable of predicting how many Certified Professional Controllers (CPCs) the FA | Based on a what-if scenario, the system provides a graphical representation of a 5 or 10 year CPC staffing outlook (which is the predictive component provided by the ML). Along with the cost (training and salary), the "when" the target shifts (is it ear | 02/10/2023 | No | Python-code algorithms are applied to multiple data sources owned by the agency such as Staffing Work Book, Federal Personnel Payroll System, National Training Database, Controller workforce plan, etc. to train, fine-tune, and evaluate performance of the | Yes | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000009 | Automated Delay detection using voice processing | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI-based voice recognition of Air Traffic Control (ATC) and pilot communications associated with a flight is required for enhanced and accurate delay reporting and attribution. Many delay events, such as vectoring, are not currently reported/detected/acc | The output of the AI system is data that captures a delay and the associated cause. This will be combined with other data sources to provide a more accurate representation of a flight. | 02/10/2023 | No | The AI-based voice recognition process is trained using audio recording of ATC to pilot communications. | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000010 | Technical Operations Predictive Maintenance | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Utilize equipment telemetry data, statistical modeling and Machine Learning to predict equipment failures before they occur. This will improve operational efficiency and safety by reducing unscheduled outages and/or shortening outage times as replacemen | Utilize equipment telemetry data, statistical modeling and Machine Learning to predict equipment failures before they occur. This will improve operational efficiency and safety by reducing unscheduled outages and/or shortening outage times as replacement | No | Utilize equipment telemetry data, statistical modeling and Machine Learning to predict equipment failures before they occur. This will improve operational efficiency and safety by reducing unscheduled outages and/or shortening outage times as replacement | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000011 | Airborne Safety Metric (ASM) | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Other | The use of AI modeling can categorize the types of events based on the information provided in the accident and incident report, and its result will support the safety metric calculation. | The AI system can classify event types such as mid-air collisions, flight into terrain accidents, and turbulence events for a given accident or incident report. | 10/01/2019 | No | ASM ingests accident and incident reports such as Mandatory Occurrence Reports (MOR) and Aviation Safety Information Analysis & Sharing (ASIAS) data as the data source for the safety metrics calculation | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000012 | Surface Safety Metric (SSM) | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Other | Automated review of accident narratives. | Classification of aircraft accidents as a - runway collision - taxiway collision - runway excursion - not a surface event | 10/01/2018 | No | Internal and external accident report narratives. | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000013 | Surface Report Classifier (SCM/Auto-Class) | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Other | To classify surface Mandatory Occurrence Reports (MORs) to reduce the amount of manual event review. | Classes - runway excursion, runway incursion, surface incident. | 01/01/2019 | No | Manually curated dataset for surface events | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000014 | Pilot-Controller Voice to Text Transcription | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | To transcribe recorded radio communications between pilots and air traffic controllers operating within the national airspace. | Text transcriptions and speaker role (pilot/controller) | No | 1000 hour voice corpus | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000015 | Detection of Unmanned Aircraft Systems (UAS) Encounter | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Unmanned Aircraft Systems (UAS) are a source of potential safety risk. Manual data exploration by a human is labor and time consuming. Automatic data mining can help to identify safety issues. | Binary classification (yes/no) label for UAS event detection | 05/10/2022 | No | Manually labeled training set | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000016 | Human Performance Taxonomy labeling | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | To label radio communication transcripts according to the HP Taxonomy labeling system. | One of 400 HP taxonomy labels. | No | SME-labeled training set | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000021 | Regulatory Compliance Mapping Tool | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The AVS International office is required to identify means of compliance to ICAO Standards and Recommended Practices (SARPs). Both SARPs and means of compliance evidence are text paragraphs scattered across thousands of pages of documents. AOV identifie | No | No | No | |||||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000022 | JASC Code classification in Safety Difficulty Reports (SDR) | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | AVS identified a need to derive the joint aircraft system codes (JASC) chapter codes from the narrative description within service difficulty reports (SDR), a form of safety event reporting from aircraft operators. A team of graduate students at George Ma | No | No | No | |||||||||||||||||||||
| Department Of Transportation | FAA ANG | DOT-1000024 | Offshore Precipitation Capability (OPC) | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The AI/ML tool helps detect and predict where precipitation currently is and where it will be in the future at various altitudes in parts of the NAS that do not have traditional ground based weather radar. Benefits are better weather detection and avoid | The output of the system is meant to simulate weather radar where cooler colors (blues and greens) correspond to light precip where hotter colors (yellow, oranges and red) indicate heavy precip and/or more intense precipitation. A similar process is down | 12/01/2013 | No | None. It's public data from NOAA/NWS. | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000025 | Course Deviation Identification for Multiple Airport Route Separation (MARS) | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The Multiple Airport Route Separation (MARS) program is developing a safety case for reduced separation standards between Performance Based Navigation (PBN) routes in terminal airspace. These new standards may enable deconfliction of airports in high-dema | No | No | No | |||||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000026 | Improve and Speed up the Certification Service Oversight Process Using Intelligent Document Review | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Before digitization: * The Certification Services Oversight Process (CSOP) is a manually intensive paper process * Requires the public and small businesses to provide input for Certification requests * No external applicant tracking which results in p | Before digitization: * The Certification Services Oversight Process (CSOP) is a manually intensive paper process * Requires the public and small businesses to provide input for Certification requests * No external applicant tracking which results in | 07/01/2024 | No | Dynamic Regulatory System | Yes | No | ||||||||||||||||||
| Department Of Transportation | FAA ANG | DOT-1000027 | Determining Surface Winds with Machine Learning Software | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Successfully demonstrated use of an AI capability to analyze camera images of a wind sock to produce highly accurate surface wind speed and direction information in remote areas that don’t have a weather observing sensor. | No | No | No | |||||||||||||||||||||
| Department Of Transportation | FAA ANG | DOT-1000028 | Remote Oceanic Meteorological Information Operations (ROMIO) | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | ROMIO is an operational demonstration to evaluate the feasibility to uplink convective weather information to aircraft operating over the ocean and remote regions. Capability converted weather satellite data, lightning and weather prediction model data in | No | No | No | |||||||||||||||||||||
| Department Of Transportation | NHTSA | DOT-1000030 | Head Kinematics Prediction | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Description: Utilize deep learning models for predicting head kinematics directly from crash videos. The utilization of deep learning techniques enables the extraction of 3D kinematics from 2D views, offering a viable alternative for calculating head kine | Angular velocity - injury prediction | No | No | No | ||||||||||||||||||||
| Department Of Transportation | NHTSA | DOT-1000031 | Crash Parameter Prediction | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Description: Utilize deep learning for predicting crash parameters, Delta-V (change in velocity) and PDOF (principal direction of force), directly from real-world crash images. Delta-V and PDOF are the two most important parameters affecting injury outcome. D | Delta-V & PDOF | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FRA | DOT-1000033 | Crushed Aggregate Gradation Evaluation System | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Description: Deep learning computer vision algorithms aimed at analyzing aggregate particle size grading. Input: Images of ballast cross sections | Ballast fouling index | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FRA | DOT-1000034 | Automatic Track Change Detection Demonstration and Analysis | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Description: DeepCNet-based neural network to identify and classify track-related features (e.g., track components, such as fasteners and ties) for "change detection" applications. Input: Line-scan images from rail-bound inspection systems. | Notification of changes from status quo or between different inspections based on geolocation. | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FRA | DOT-1000035 | Predictive Analytics Using Autonomous Track Geometry Measurement System (ATGMS) Data | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Description: Leveraging large volumes of these recursive track geometry measurements to develop and implement automated machine-learning-based processes for analyzing, predicting, and reporting track locations of concern, including those with significant | Inspection report that includes the trending of track geometry measures and time to failure (i.e., maintenance and safety limits). | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000036 | Integration of small Unmanned Aircraft System (sUAS) Geospatial Information System (GIS) Technologie | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Other | This use case will be used to enable new methods and tools that support the maintenance life cycle of National Airspace System (NAS) infrastructure. | Used to enable non-georeferenced data to be integrated into digital form. | No | Esri ArcGIS and associated tools and applications | No | No | |||||||||||||||||||
| Department Of Transportation | FHWA | DOT-1000038 | Geolocating and Identifying Vehicle Hard Brake, Acceleration and Seat Belt Usage with CV Data | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Description: The ongoing work leverages AI/ML in big traffic data analytics to identify roadway geolocations where potential safety issues may exist through the integration of connected vehicle data, speed and Individual Vehicle Record data, historical cr | Maps and Reports | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FHWA Turner-Fairbank | DOT-1000039 | Path to Advanced Novel Data Analytics (PANDA), a data science lab. | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Description: A data science lab established at Turner-Fairbank Highway Research Center. | Promote usage of AI/ML tools on Databricks platform and implement research use cases across multiple highway disciplines. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | FHWA | DOT-1000040 | Exploratory Advanced Research group of projects | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Description: The Exploratory Advanced Research (EAR) Program addresses the need for longer term, higher risk research with the potential for transformative improvements to transportation systems. The EAR Program currently funds several projects that focus | Inputs and outputs vary across the projects. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, FAA, OST-R, Volpe, OCIO | DOT-1000041 | Enterprise Knowledge Graph and Advanced AI Data Structures Collaboration Group | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | DOT-Private Sector collaboration to develop advanced AI data structure expertise, lessons learned, and best practices to accelerate AI development across the agency. | High-quality and efficient data structures increase AI model efficiency and accuracy. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO | DOT-1000042 | Transportation Use Case Knowledge Repository (TrUCKR) | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | TrUCKR is the CAIO managed DOT platform for tracking the Department's unclassified AI use case development, maturity, assessments, clearances, risk evaluations and mitigations, and authorities to operate across the use case lifecycle for operations, resea | TrUCKR is the Department's data source for prioritizing and driving the acceleration of AI, monitoring AI governance and compliance, and internal and external collaboration. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, NETT | DOT-1000043 | NETT Council AI Coordination and Activities (AICA) Working Group | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | The AICA Working Group consists of Operating Administration (OA) and Secretarial Offices AI leaders and subject matter experts across the Department. It is chaired by the CAIO and vice-chaired by representatives from the DOT Office of Research and Techno | Provides DOT with the necessary administrative infrastructure for coordinating Executive Order and other federal guidance and mandates, tracking AI activities, and collaborating on AI initiatives and compliance, governance, and guidance documents. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, OST-M, OCIO, OIE | DOT-1000044 | DOT Workforce Acquisitions and Training Team | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | The Workforce Acquisitions and Training Team members are DOT OA and Secretarial Office leaders responsible for AI talent acquisition and training and inspiring the workforce to accelerate AI adoption. | Provides DOT with coordination and collaboration on AI hiring and training initiatives. | 12/11/2024 | No | No | Not Applicable | |||||||||||||||||||
| Department Of Transportation | CAIO, OCIO, OST-P, OST-M, NETT | DOT-1000045 | NETT Council Safety, Rights, and Security Review (SR2) Committee | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | The SR2 Committee reviews and approves the operational deployment of all AI use cases and the public sharing of use case data, models, and code. The SR2 Committee is also responsible for performing the Security Review required by Executive Order 14110 Sec | Provides the Department with expert oversight of AI use case safety, security, rights, privacy, and data. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, OST-M, FHWA, OCIO | DOT-1000046 | DOT AI Procurement Team | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | The Procurement Team members are DOT OA and Secretarial Office leaders responsible for AI procurement and acquisitions. | Provides DOT with internal and external coordination and collaboration on AI procurement and acquisition policy and procedures. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, OST-R, OCIO | DOT-1000047 | DOT AI Emerging Technology Team | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Other | The AI Emerging Technology Team members are DOT OA and Secretarial Office leaders most knowledgeable in leading-edge AI technology, capacity, and capabilities. | Provides DOT with internal and external coordination and collaboration on establishing and maintaining leading-edge AI technology, capacity, and capabilities. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | NHTSA | DOT-1000048 | NHTSA Interim Generative AI (GenAI) Usage Guidance | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Generative AI | Guidelines on usage of commercially available Generative Artificial Intelligence (GenAI) Tools and Services. | NHTSA guidelines (do's and don'ts) for using publicly available GenAI services while protecting security and privacy of DOT data. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | CAIO, OST-P, OCIO, OST-R | DOT-1000049 | DOT Generative AI (GenAI) Policy | Pre-deployment | Administrative Functions | Pre-deployment | Not high-impact | Not high-impact | Generative AI | Provide DOT-wide generative AI (GenAI) policy. | Establish guidelines and safeguards for GenAI use within the Department. | No | No | Not Applicable | ||||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000050 | Surface Safety Metric (DOT Key Performance Indicator) | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Other | Creating weighting scheme values based on safety data for measuring and monitoring safety risk in the National Airspace System. The weights are used to generate safety indices for surface and airborne settings, for both commercial and non-commercial flights. | Safety indices for Commercial and Non-Commercial accident and incident categories with the SSM Safety Performance Target. | 10/01/2024 | No | Aviation Risk Identification and Assessment (ARIA), Aviation Safety Information Analysis & Sharing (ASIAS) data. | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000051 | Power Platform Use | Deployed | Administrative Functions | Deployed | Not high-impact | Not high-impact | Generative AI | Speeds up the development of code structure. | Provides programming language recommendations (C#) to support coding/software development. | 04/01/2025 | No | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000052 | Tech Ops LLM Document Search | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The AI system designed for Tech-Ops personnel is a comprehensive support tool that enhances efficiency by providing rapid, contextually accurate answers drawn from a central Knowledge Library. This Knowledge Library contains Technical Instruction Manuals, | Direct Answers to Queries: When a technician inputs a question in plain language, the AI generates a concise, relevant answer drawn directly from the Technical Instruction Manuals, Maintenance Handbooks, or other materials in the Knowledge Library. This r | 05/01/2024 | No | Technical Instruction Manuals (TI) Maintenance Handbooks (MHB) Training Course Material Historical Maintenance Logs | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AHR | DOT-1000053 | Human Resources Policy Manual (HRPM) LLM Document Search | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Natural Language Processing (NLP) | To allow employees to quickly and easily (via natural language processing) ask Human Resources policy questions and receive an easy-to-understand response. | The system outputs HR policy in response to questions which are input by users (employees). | 04/01/2024 | No | No training and/or fine-tuning done by FAA. AHR has performed internal testing to evaluate both the consistency and accuracy of responses. | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AFN | DOT-1000054 | Financials LLM Document Search | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Generative AI | Improving efficiency and productivity in budget and finance. | Financial data retrieved from text resources | 02/01/2024 | No | Financial reports and budget estimates | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000055 | Civil Aviation Registry Electronic Services (CARES) LLM Document Search | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Generative AI | CARES will use AI (Azure chatbot) to scan thousands of pages and generate an accurate answer. The benefits will be a faster and deeper understanding of the business requirements so the FAA can speed up development. | The Azure chatbot will find the information requested and output the information in a clear, easy-to-understand answer. | No | The chatbot utilizes OpenAI GPT models within the FAA security boundary via the FAA's subscription to Azure. OpenAI GPT models within the FAA's subscription on Azure do not retain user data and do not use user data to train their models. | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AFN | DOT-1000056 | Financial Policy LLM Document Search | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Generative AI | The Financial Policy Chatbot (AI system) will search a variety of financial policy documents to answer users' financial policy questions, and more importantly, will link to the document(s) where it found the answer. The chatbot will initially be used | The Financial Policy Chatbot (AI system) will search a variety of financial policy documents to answer users' financial policy questions, and more importantly, will link to the document(s) where it found the answer. | No | Financial Policy Chatbot input sources: - FAA Financial Manual - FAA Acquisition Management System - FAA Financial Policy SOPs - FAA Financial Policy Q&As - Dollars & Sense transcripts - FAA Financial Policy E-Learning course transcripts - FAA Financial P | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000057 | Spectrum Assignment & Engineering Team LLM Document Search | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Generative AI | Intended purpose: To make the end-user experience easier, more interactive, and more engaging. Benefits: Increased customer satisfaction of FAA's coordination systems' users | Responses to help questions | 04/10/2024 | No | FAA owns the WebFCR system, which requires that coordination requests for frequency spectrum be provided in particular formats as prescribed by the NTIA Spectrum Redbook manual. All data used is from the help file for WebFCR and the National Telecommunications and | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000058 | National Program Office LLM Document Search | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Generative AI | The intended purpose is to provide answers to Element Design Data Collection Tool (ED DCT) questions used during COS (continued operational safety), IC (initial certification), and configuration changes of a certificate holder's operation. The benefits in | The outputs are answers to Element Design Data Collection Tool (EDDCT) questions. | No | FAA Safety Assurance System (SAS) data is used to evaluate the performance. | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000059 | FAA Orders and associated supplemental change notices LLM Document Search | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Generative AI | The intended purpose of the AI chatbot is to streamline the retrieval of information from multiple FAA Orders and change notices. The expected benefits include saving time, improving accuracy in finding specific citations, and reducing manual effort for u | The chatbot outputs specific citations and relevant sections from multiple FAA Orders and change notices, which provides quick access to information based on user queries. It delivers clear and concise responses by searching through documents like JO 711 | No | Currently, we do not have access to an enterprise data catalog or agency-wide data repository. Although we attempted to contact those involved with the Dynamic Regulatory System (DRS) project, which is intended to be the central repository for FAA Orders, | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000060 | Remote Maintenance Monitoring (RMM) Analyzer Copilot | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The intended purpose of the AI is to serve as a "Copilot" for technical operations, allowing users to interact with complex data in plain language. This Copilot will retrieve, interpret, and present relevant information from the Instrument Landing Systems | Enhanced Accessibility: Users can pose questions in plain language, bypassing the need for technical know-how on data filtering or analysis, making data insights available to a broader range of personnel. Time Efficiency: By quickly retrieving relevant d | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000061 | National Airspace System (NAS) Safety Anomaly Metric with STAD integration (Safety Trend Analytics D | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | To measure and monitor the degree of anomaly or deviation from a baseline based on safety data in the National Airspace System, down to the Service Area, district, and facility level. | One-day look-ahead forecast of anomalous/non-anomalous conditions at levels from NAS-wide down to facilities. | 10/01/2024 | No | SysOps performance data, Meteorological Terminal Aviation Routine Weather Report (METAR), Remote Monitoring and Logging System (RMLS), limited staffing data | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000062 | Using Generative AI to Automatically Tag Narratives with Human Performance Common Taxonomy Factors | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Generative AI | Using genAI, automatically tag text narratives from Comprehensive Electronic Data Analysis and Reporting (CEDAR) Mandatory Occurrence Reports (MORs) with factors from the Human Performance Common Taxonomy (HP CT) at scale and with significant speed-up. | HP CT factors related to an MOR's narrative. | 08/01/2024 | No | Human Performance Common Taxonomy Shared Application for Factor Evaluation (SAFE) | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000063 | Generative AI Use Case from Inventory | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Generative AI | This use case will be used to increase trust in a new technology approach through a socialization exercise, and to increase stakeholder buy-in and education with a useful product built from their input. | Execute a use case chosen from a collection of candidates from Safety Trend Analytics Dashboard (STAD) stakeholders. | 10/01/2024 | No | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000064 | Human Performance (HP) Fatigue Recommendations | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | This use case will be used to develop an understanding of HP Fatigue recommendations and their ties to safety risk. | To be determined | No | Administrator's fatigue recommendations (FY24). Safety data | Yes | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000065 | Prognosis and Diagnosis Based on Contributing Factors to Safety Risk | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | To increase explainability and interpretability of changes in safety risk based on contributing factors from an LLM tuned to the Human Factors Common Taxonomy. | The Human Factors Common Taxonomy factors associated with a change in safety risk in the NAS. | No | Human Performance Common Taxonomy as collected with the Shared Application for Factor Evaluation (SAFE) tool | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AJO | DOT-1000066 | Knowledge Representation for Deep Learning | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The overall purpose of the effort is to establish Knowledge Graph usage to develop Deep Learning, starting with a specific problem statement. The problem statement is to "Enhance the management and resolution of deviation events by providing actionable in | 1. Diagnostic Insights - Event Analysis: Contributing factors to a deviation, with root cause identification based on historical and real-time data. - Priority Assessment: Highlight of critical issues requiring immediate attention, ranked by severity and | 06/03/2024 | No | - Historical Deviation Logs: Comprehensive records of past Code 80 and related sub-codes events, detailing the conditions, actions taken, and resolutions. These logs help train and evaluate the model's ability to identify patterns and predict future devia | No | No | ||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000067 | Identification of Similar Continued Operational Safety (COS) Events | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Other | AIR is utilizing AI within Palantir Foundry (i.e., Foundry's "Artificial Intelligence Platform" (AIP)) to search, find and link similar types of events (e.g., Continued Operational Safety (COS) event reports, Notices of Noncompliance (NCN), semantic searc | The output aggregates information from disparate sources to provide an integrated view of events that are similar. | 01/02/2024 | No | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000068 | Flight Standards Inspector Readiness Program (FSIRP) | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | This use case will help automate current manual processes and reduce tasks. | 01/02/2024 | No | Existing FSIRP database | Yes | No | |||||||||||||||||||
| Department Of Transportation | FAA AGC | DOT-1000069 | Court Reporting and Testimony Management Platform | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | This technology will provide solutions for remote, hybrid, and in-person proceedings that will include both transcription and translation services. | FAA Office of General Counsel (AGC) personnel would receive detailed summaries for both rough and certified transcripts from a variety of proceedings: depositions, arbitrations, expert interviews, meetings, and more. In addition, such a product would p | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FAA AGC | DOT-1000070 | AI-Assisted Legal Research | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The AI-Assisted Research will return a summarized overview or draft with detailed insights from top results, along with a list of key cases, statutes, and regulations. | The ability to type a question, get an answer, and have all the supporting resources right underneath that answer. Additionally, the AI-Assisted Research answer or draft generated is supported by case law that is already within the database. | No | No | No | ||||||||||||||||||||
| Department Of Transportation | FAA AVS | DOT-1000071 | Service Difficulty Reporting System (SDRS) Joint Aircraft System/Component Code (JASC) Code Picker | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | This use case will leverage AI to assist in sorting and assigning proper JASC codes within the SDRS environment as an addition in the next development cycle for SDRS. | Modified original input of JASC codes to ensure correct and consistent JASC Codes within the system | No | Current SDRS database available in the Enterprise Information Management Data Platform (EDP) | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AGC | DOT-1000072 | Case and Document Management Copilot | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | The Copilot would be natively embedded across an application to deliver a consistent user experience that can answer questions, generate content, and dynamically automate any action. | The copilot includes a library of actions, which are pre-programmed capabilities that enable the copilot to not only answer questions using business data, but also string together workflows to get things done in support of users. | No | Legal and operational data in the Case and Document Management System or FAA intranet. | No | No | |||||||||||||||||||
| Department Of Transportation | FAA AFN | DOT-1000073 | AI Freedom of Information Act (FOIA) Request Classification | Pre-deployment | Transportation | Pre-deployment | Not high-impact | Not high-impact | Other | Will strengthen the Intake and Assignment Branch's precision of FOIA request assignments on the first pass, freeing up time to actively engage with requesters and customers alike. Overall, these FOIA tools will allow for a more informed decision-making p | The output will provide guidance for the FOIA Xpress end user in determining the routing assignment to the appropriate LOB/Organization. | No | Order 1100.1C | No | No | |||||||||||||||||||
| Department Of Transportation | FAA APL | DOT-1000074 | Community Engagement Chat Bot | Deployed | Transportation | Deployed | Not high-impact | Not high-impact | Natural Language Processing (NLP) | Assisting the public in finding information on the FAA website. | We are using our AI Chat Bot to better inform the public. In our efforts to engage with the public we wanted to use this tool to direct the public to the tremendous amount of information that is on the FAA Website. | 06/08/2021 | No | We are constantly working to refine the Chat Bot responses and help it pull the best information that is available on the FAA Website to provide value to the public | No | No | ||||||||||||||||||
| Department Of Transportation | OST | DOT-1000084 | Public Comment Analyzer (PCA) | Deployed | Administrative Functions | Deployed | Not high-impact | Not high-impact | Agentic AI | The business goal of this project is to use a powerful language model to analyze public comments. The model will: • Categorize comments into different topics • Detect the sentiment (positive, negative, or neutral) of the comments • Generate summaries of the comments • Provide daily updates on the comments • Allow secure access to the information | Data is available on Regulations.gov. | No | No | |||||||||||||||||||||
| Department Of Transportation | OST | DOT-1000215 | Conversational Multi-modal Chatbot (Google Gemini) | Deployed | Administrative Functions | Deployed | Not high-impact | Not high-impact | Generative AI | DOT needs a large language model that can integrate with word processing, email, presentations, and shared drives. | 10/01/2025 | No | ||||||||||||||||||||||
| Department Of Transportation | OST | DOT-1000216 | Report Summarization (Google NotebookLM) | Deployed | Administrative Functions | Deployed | Not high-impact | Not high-impact | Generative AI | DOT employees need an environment to upload custom documents for Retrieval-Augmented Generation (RAG). | 10/01/2025 | No | ||||||||||||||||||||||
| Department Of Transportation | OST | DOT-1000217 | USAi (multi-model large language models) | Pilot | Administrative Functions | Pilot | Not high-impact | Not high-impact | Generative AI | DOT needs access to a multi-model environment that allows it to experiment with and compare top AI models. | GSA | No | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-897 | Employee Transactions for HR Decision-Making | a) Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | AI is intended to automate repeatable tasks, improve efficiency, measure employee performance, support onboarding, improve customer experience, improve user adoption, and improve analytics and reporting to further support decision-making and personalize experiences. | - Increase efficiency of Core Human Resources (HR) and Payroll - Enhance employee and veteran outcomes e.g., improve customer experience and improve user adoption | - Built-in user aids: Intelligent search, personalized recommendations, and proactive guidance. - Configuration and maintenance: Automated tasks, predictive analytics, and streamlined processes. AI Agents are embedded and can be enabled by customers to be ready to help across all areas of the HCM lifecycle: - Intelligent Recruiting: AI helps in creating job descriptions, identifying and ranking candidates, and generating interview questions. - Personalized Employee Experiences: Generative AI provides customized recommendations for career development and learning paths and can generate personalized learning content. - AI-Powered Employee Self-Service: AI chatbots answer employee questions and guide them through tasks. - Data-Driven Workforce Planning: Generative AI transforms workforce data into actionable insights for strategic planning, supports scenario simulations, and forecasts skill gaps. - Enhancing Employee Well-Being: AI-powered sentiment analysis assesses employee feedback to evaluate morale and potential risks and can generate personalized well-being programs. - Core HR and Payroll: AI is being integrated to improve the efficiency of these functions. 
| Department Of Veterans Affairs | VA-25-6956 | Cyber Authorization Memo Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | VA cybersecurity staff are challenged with managing and staying updated with an extensive array of documentation related to policies, processes, directives, and frameworks (which are essential for maintaining compliance and ensuring the security and privacy of VA information systems). The VA Risk Management Framework (RMF) program continues to rely on human decision-making for risk-based decisions. This human-centric approach requires extensive involvement from various roles and generation/consumption of large amounts of artifacts and staffing resources. | This tool will significantly reduce the time required for memo generation, saving hours of work for each memo signed. By implementing AI in these use cases, EHRM aims to streamline processes, improve efficiency, and reduce the workload on both cyber personnel and administrative staff. | Compliance checklist results with policy references and recommendations. Automated authorization documentation. |
| Department Of Veterans Affairs | VA-25-7168 | Clinical AI Agent | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Administrative tasks, manual documentation, and complex workflows within the Electronic Health Record (EHR) cause lower clinical efficiency and operational effectiveness. | By implementing the Clinical AI Agent (CAA), VA providers will leverage new technology to decrease time spent completing documentation and administrative tasks during a Veteran visit. This leads to increased end-user satisfaction, reduced provider burnout, and ensures providers are spending time prioritizing Veteran care. | This agent will generate draft clinical notes based on the clinician-patient conversations. The generated notes can be reviewed, modified, and signed in both CAA and the Message Center. The Clinical AI Agent is intended to streamline administrative tasks, reduce costs, and enhance patient care through close integration with the electronic health record (EHR). |
| Department Of Veterans Affairs | VA-25-2455 | Lexis+ | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Improve the quality and efficiency of legal research, legal document review, and legal document preparation. The Lexis+ Document Analysis tool will improve efficiency and accuracy of legal services by giving OGC legal professionals the ability to have documents read and researched by the tool, reducing the time and legal skills required for legal research, legal writing, legal reviews, and trial preparation. The AI solution is preparing and synthesizing notes to be used by the legal teams; it is not preparing a conclusion or draft decision for legal teams. | - Better outcomes from litigation - Improve the timeliness and accuracy of legal advice | Legal memoranda, legal pleadings (e.g. motions and briefs), and legal reviews. | Legal memoranda, legal pleadings (e.g. motions and briefs), and legal reviews. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2758 | VA VoiceBot for Call Center Modernization | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The VoiceBot can quickly handle common inquiries, offering Veterans immediate answers and reducing the frustration of long wait times. This allows Contact Centers to focus resources on more complex issues, ultimately improving response times for urgent matters and reducing overall call center costs. | The VA VoiceBot aims to enhance the Veteran experience by providing efficient self-service via telephone, reducing wait times, and freeing up call center resources for more complex issues. It will utilize advanced Natural Language Understanding (NLU) to manage routine inquiries and seamlessly escalate to human agents, when necessary, ensuring context and identity are retained. The initiative is focused on integrating VoiceBot across call center lines such as Women Veterans Call Center (WVCC), VBA, and VetHOME, which will streamline processes, improve support consistency, and lower operational costs. | Depends on call center. Output can include FAQ and authenticated information via APIs. | Depends on call center. Output can include FAQ and authenticated information via APIs. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-585 | Forescout: AI Behavioral Anomaly Detection w/ Automated Enforcement | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Leverage AI/ML models to learn normal behavior patterns across endpoints, IoT, and OT devices. Identify zero-day threats, insider attacks, or lateral movement by compromised devices that are missed by point-in-time tools within the network. Use these patterns to detect anomalies in real time. When an anomaly (unusual data transfer, unauthorized protocol, lateral movement) is detected Forescout automatically enforces a response policy—such as quarantining the device, notifying SOC security teams, or triggering an orchestration workflow. | Benefits include real-time response to unknown threats, reduced time to detect and respond, eliminates manual correlation with siloed tools. | Response policy; quarantining the device, notifying SOC security teams, or triggering an orchestration workflow | Response policy; quarantining the device, notifying SOC security teams, or triggering an orchestration workflow | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1294 | VA AI Assist | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Reviewing information about a patient prior to an upcoming encounter is an important way in which clinicians prepare for an effective visit. Clinicians are challenged to effectively complete this review prior to encounters, given time constraints and the manual effort required to navigate electronic health record systems to seek the information. This results in a lower effectiveness of the visit and increased clinician burnout. | There is a significant opportunity for AI-powered search and summarization of clinical information to assist clinicians with efficiently and effectively reviewing a patient’s record prior to an encounter. | Outputs include summaries of clinical data from VistA. | Outputs include summaries of clinical data from VistA. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1877 | Service Period Validator (SPV) POC | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Validate Veterans' service periods more accurately and quickly. This will ultimately save time and increase the quality of Veterans' decisions. | Increase efficiency of benefits delivery and provide an accurate and timely automation of benefit eligibility decisions (original claims) by proactively assessing and correcting data quality of Veteran service period. | A decision within the automated claims flow that validates the Department of Defense (DoD) service period data. | A decision within the automated claims flow that validates the Department of Defense (DoD) service period data. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1930 | Claims Processing Automation Quality Auditor POC | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | In an automation-driven claims process, this tool will act as an entry point for quality reviews, performing claims quality audits to identify and request NQA reviews or VCE corrections based on automated claims processing anomalies/errors. | Increased efficiency, improved benefit outcomes, faster decisions for Veterans. | Within that automation workflow, the claim will be offramped (based on the criteria of the claim and resulting automated decision) to a human to be worked. | Within that automation workflow, the claim will be offramped (based on the criteria of the claim and resulting automated decision) to a human to be worked. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3685 | CSOC AI Analyst Accelerator (Andesite AI) | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | VA Cybersecurity Operations Center (CSOC) requires new tools and capabilities to meet increasing demands for monitoring and securing the VA’s complex information technology (IT) environment, which currently involve time-consuming and manual processes. | Andesite AI has the potential to be a significant part of VA’s overall defensive security strategy which will provide comprehensive visibility by enabling the cybersecurity team to transform and accelerate investigations, connect data silos, and decrease time to response. Hosted in VAEC AWS, Andesite AI will connect via API to several endpoints such as Splunk, SOAR, ThreatQ and the VA Network to seamlessly integrate information to assist Cybersecurity Analysts with their investigations. | Andesite AI will submit prompts and retrieve results from AWS Bedrock (via Claude Sonnet) to help review potential incidents and perform investigations. The inputs of the model will include security event data from Splunk and Microsoft Sentinel used by CSOC analysts such as vulnerability data, indicators of compromise, security and event logs, and other threat intelligence data. Outputs of the model will include descriptions of potential incidents/vulnerabilities, suggestions on remediation and prioritization, and contextual information and correlation of various security event data. The users of Andesite AI will be 20 VA CSOC analysts. | Andesite AI will submit prompts and retrieve results from AWS Bedrock (via Claude Sonnet) to help review potential incidents and perform investigations. The inputs of the model will include security event data from Splunk and Microsoft Sentinel used by CSOC analysts such as vulnerability data, indicators of compromise, security and event logs, and other threat intelligence data. Outputs of the model will include descriptions of potential incidents/vulnerabilities, suggestions on remediation and prioritization, and contextual information and correlation of various security event data. The users of Andesite AI will be 20 VA CSOC analysts. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3767 | Exam Verification Agent | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The process for a Veteran Service Representative (VSR) to determine if an exam is warranted for a claim for increase (claims with end product [EP] 020) for a given contention requires that the VSR expend considerable effort and time to review extensive documentation within a Veteran or Claimant's eFolder. The VSR needs to review this documentation to identify whether sufficient evidence already exists for the claim to proceed to a rating or if an exam should be ordered to be able to rate the claim. The AI will reduce the time needed to review the evidence by providing a list of related evidence to the contention and a recommendation on whether an exam is needed. | The Exam Verification Agent system will reduce the time needed to develop a claim, increasing efficiency in benefits processing. It transforms the complex and time-intensive process of determining medical exam necessity for disability claims for increase by deploying multiple specialized AI agents. The agents autonomously evaluate claim evidence and provide a recommendation and a summary of the reasoning for the recommendation to the user. | The product will provide VSRs with: - An agentic AI tool that identifies whether an exam is required for each contention for which there is a Claim for Increase - A summary of evidence identified within the file and links to the documents identified within the Veteran’s file. | The product will provide VSRs with: - An agentic AI tool that identifies whether an exam is required for each contention for which there is a Claim for Increase - A summary of evidence identified within the file and links to the documents identified within the Veteran’s file. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4147 | Benefits Contention Classification Model | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The Benefits Contention Classification model is intended to improve the current benefits adjudication process by providing an automated classification for all benefits claims contentions. The model will do this by providing a classification for each contention that is not currently covered by the existing process. This is intended to make the overall benefits adjudication process more efficient by reducing the necessary time for VA rating representatives to choose the appropriate disability questionnaire for the given contention. | The expected benefit of this model is increasing efficiency in the benefits adjudication process by automating the internal classification required to select the disability benefits questionnaire that should be sent to the Veteran. | The model outputs an internal classification (e.g., 3140) which is used to classify free text into a medical category. | The model outputs an internal classification (e.g., 3140) which is used to classify free text into a medical category. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4465 | Limited Payability Letter Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The re-issuance of a benefits payment following Limited Payability currently relies on manual steps that take an undetermined amount of time. Using AI will allow VA to automate the re-issuance of benefits, accelerating the delivery of those re-issued payments to beneficiaries and reducing manual errors during the process. | Increased efficiency, increased accuracy, and quicker delivery of benefits payments to beneficiaries. | The AI system will read the signed and returned Limited Payability letter and automatically prepare a payment transaction using that data. | The AI system will read the signed and returned Limited Payability letter and automatically prepare a payment transaction using that data. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4527 | Disability Benefits Document Classifier | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | As a Veteran submitting documents to support a claim, I want the VA to automatically classify my uploads so that I can avoid confusing dropdowns and feel confident my evidence will be reviewed and processed correctly. The burden currently rests with Veterans to classify/label their documents on VA.gov after manually uploading them. As a result: - Veterans face confusing submission experiences - Veterans overuse the "other/correspondence" document label - Veteran Service Representatives (VSRs) waste time opening mislabeled files in the Veterans Benefits Management System (VBMS) eFolder - Automation systems ignore incorrectly labeled evidence or misroute it (causing delays of up to 26 days) | Automating this manual task saves time for Veterans applying for benefits, VSRs processing claims, and increases throughput and accuracy of downstream automation systems. | For each document uploaded, our system returns the corresponding VA.gov/VBMS-defined document type. | For each document uploaded, our system returns the corresponding VA.gov/VBMS-defined document type. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5269 | Paid & Due Auto Calculation | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The Paid & Due Audit calculation is currently a manual process that is both complicated and time consuming. However, the use of Agentic AI will allow VA to automate portions of the data collection and audit calculation process, accelerating the results of each Paid & Due Audit and lessening the impact of manual errors. | Accelerated data collection, increased accuracy, and quicker results for each Paid & Due Audit calculation. | The Agentic AI system will search and collect pertinent information related to the current Paid & Due Audit calculation. Once collected, the AI system will assist with some of the calculations and offer suggested actions for the user in order to better complete the Paid & Due Audit. | The Agentic AI system will search and collect pertinent information related to the current Paid & Due Audit calculation. Once collected, the AI system will assist with some of the calculations and offer suggested actions for the user in order to better complete the Paid & Due Audit. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6276 | Summarization of Clinical Data | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Give a provider a summarized status of the patient and what has happened to the patient since they last saw the Veteran. | Decreased provider burden and increased time the provider can dedicate to the Veteran during the encounter through increased efficiency. | Summary of the patient's clinical data. | Summary of the patient's clinical data. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-817 | CCN Provider Directory | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Reduce the need for manual review of a large (1.5M+ providers, 9M+ provider services) dataset of provider information to categorize or perform data governance functions. | Maintaining a highly accurate and properly categorized dataset of VA and Community Providers actively providing healthcare to our Veterans ensures accurate and timely Veteran referrals, appointment scheduling, and facilitation of healthcare program service authorization. Additionally, applying AI to the consolidated Provider dataset allows critical federal review of provider activity and enables discovery of fraud, waste, and abuse for the program. | - Determination of the Harassment Prevention Program (HPP) criteria - Finding and retrieving providers based on their locations and medical/dental services provided - Data quality and compliance with federal standards, such as address, zip codes, and rurality | - Determination of the Harassment Prevention Program (HPP) criteria - Finding and retrieving providers based on their locations and medical/dental services provided - Data quality and compliance with federal standards, such as address, zip codes, and rurality | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-159 | Verkada Camera - OSLE | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Video search function inside a physical security camera solution. | AI will aid in the search of recorded video and images. This will decrease the number of man-hours required to search through these video and images. Additionally, this will enhance outcomes when searching for persons of interest (i.e. Missing Persons/Patients). | Video and images from the AI searches inside the physical security camera solutions. | Video and images from the AI searches inside the physical security camera solutions. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-6067 | Xtract WDS - OSLE | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Heat map detection for weapon detection software. | The AI assisted Weapon Detection System will be used to detect weapons and alert staff. This would aid in ensuring the area is safe and secure for Veterans, Visitors, and Employees. | Areas of interest in a map of the body that indicates objects that have indicators aligned with known types of different weapons. | Areas of interest in a map of the body that indicates objects that have indicators aligned with known types of different weapons. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1594 | Automated Ratings Summarization | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | We are attempting to reduce the claims backlog and processing time. Additionally, there is a large volume of documentation on a Veteran's record - this tool is intended to reduce the amount of time a claim processor spends sorting through documentation relevant to a Veteran's claimed issue. | - Increased efficiency and lower time to decision on a benefits claim for a Veteran - Improve the quality of decisions, which can ultimately result in a decrease in appeals | Identifies and generates summarizations of relevant documents from a Veteran's record, associated with the claim under review. | Identifies and generates summarizations of relevant documents from a Veteran's record, associated with the claim under review. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3480 | Smart Pension Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Reduce the overall average of days to complete a pension related benefit claim. Currently, there is a batch claim processing system in place with Pension Automation that leverages Camunda and is currently only able to process 300 claims/night due to constraints with the original solution. Due to the constraints, a large number of claims are offboarded each night for manual resolution. | The AI solution is expected to increase the number of claims awarded on a daily basis by 10% with the minimum viable product (MVP) solution and then continue to improve with iterative deliveries after the MVP solution. | The AI solution will output reporting data on which claims were awarded or off ramped on a daily basis. The solution will also provide auditable information on why a decision was made for a human to review. | The AI solution will output reporting data on which claims were awarded or off ramped on a daily basis. The solution will also provide auditable information on why a decision was made for a human to review. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-393 | Predictive Claims Processing Capability POC | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Faster claims decisions for Veterans by predicting/performing claims adjudication without the need for a rules engine | Increased efficiency, faster decisions for veterans, and improved veteran experience. | Proposed decision on Education Benefits | Proposed decision on Education Benefits | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5578 | AICES | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | 1. Reduce the backlog and processing times, minimize unnecessary in-person exams, and address workflow inefficiencies. 2. Reduce the burden of travel on veterans and expedite the time required for claims processing. During the period of performance, this effort will demonstrate the effectiveness of the proposed solution to develop Disability Benefits Questionnaires (DBQs) through Acceptable Clinical Evidence (ACE). | This initiative aims to: 1. Reduce the backlog and processing times by more efficiently using Acceptable Clinical Evidence (ACE), minimizing unnecessary in-person exams and addressing workflow inefficiencies. 2. Reduce VA costs due to reduced in-person exams. 3. Improve the Veteran experience by reducing the burden of travel on veterans and expediting the time required for claims processing. | The system indexes structured, semi-structured, and unstructured Veteran health and service record data, including diagnosis, severity, and service connection evidence from eFolders, metadata, and lay evidence. It supports Disability Benefits Questionnaire (DBQ) processing to generate SMART Claims with recommendations. Objective 1: Reduce Unnecessary In-Person Examinations through Evidence-Driven Triage and Workflow Automation Goal: Demonstrate the accuracy of the solution to properly execute ACE exams. Metric: Demonstrate the ability to produce 5000 ACE exams during the period of performance within a 72-hour delivery window, through expert review. Objective 2: Enhance the accuracy of documentation for ACE-generated DBQs Goal: Validate the capability of advanced OCR and NLP tools to correctly extract and populate data. Metric: Achieve at least a 96% accuracy rating for all ACE DBQs. | The system indexes structured, semi-structured, and unstructured Veteran health and service record data, including diagnosis, severity, and service connection evidence from eFolders, metadata, and lay evidence. It supports Disability Benefits Questionnaire (DBQ) processing to generate SMART Claims with recommendations. Objective 1: Reduce Unnecessary In-Person Examinations through Evidence-Driven Triage and Workflow Automation Goal: Demonstrate the accuracy of the solution to properly execute ACE exams. Metric: Demonstrate the ability to produce 5000 ACE exams during the period of performance within a 72-hour delivery window, through expert review. Objective 2: Enhance the accuracy of documentation for ACE-generated DBQs Goal: Validate the capability of advanced OCR and NLP tools to correctly extract and populate data. Metric: Achieve at least a 96% accuracy rating for all ACE DBQs. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6965 | Smart Ratings Recommendation | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Due to the rating decision's dependence on the preceding steps of the rating process, there is often a backlog of claims and significant rework required to complete a rating on a claim. A need exists for a specialized, automated solution to complete several pieces of the Rating process to expedite the delivery of benefits and awards to Veterans, and to reduce errors and delays resulting from remediation. | Smart Rating Recommendation empowers VSRs in the Ratings workflow to leverage AI-generated ratings of select claim and contention types in order to lower processing times for a Rating Decision to be made. This is done by allowing RVSRs to leverage AI analysis of selected claims and contentions, and provide a proposed Rating to the user to confirm or review manually themselves. | The output is a rating recommendation that the Ratings VSR can approve or reject. It is not finalized until the RVSR takes action. | The output is a rating recommendation that the Ratings VSR can approve or reject. It is not finalized until the RVSR takes action. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1442 | Roche Digital Pathology | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Decrease manual effort in analyzing and identifying tumors on slides. AI technology will inform the pathologists how much of the neoplastic cells are staining with such-and-such stain and give the percentage of tumor involvement - this is quite helpful in analyzing the involvement of prostate carcinoma in each of the prostate cores. The pathologist will not perform much of the “manual labor” and be more efficient. | Reduce and prevent errors, enhance turnaround times, increase cost savings, and enhance patient care. | Slide image analysis - AI technology will inform the pathologists how much of the neoplastic cells are staining with such-and-such stain and gives the percentage of tumor involvement | Slide image analysis - AI technology will inform the pathologists how much of the neoplastic cells are staining with such-and-such stain and gives the percentage of tumor involvement | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1770 | Pangaea | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Identification of chronic disease and advanced guidelines for treatment and early intervention. | Early intervention and treatment of chronic disease can mitigate advanced symptoms of chronic disease. | Identification of patients more at risk for chronic disease. | Identification of patients more at risk for chronic disease. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1811 | Cogitativo | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Identify Veterans at risk for chronic diseases | Early detection and treatment of chronic diseases | Chronic disease diagnosis and treatment plans. | Chronic disease diagnosis and treatment plans. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2828 | Pharmacy AI Managed Inventory System (PHAIMIS) | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | PhAIMIS is intended to represent an overarching inventory management tool for pharmacy, and has various areas where AI could be leveraged: • Predictive analytics on product needs – establishing optimal Min/Max ; ROP/ROQ (reducing on hand stock while reducing out of stock) • Inventory forecasting and identification of backorder/ supply chain issues (backorders) • Recommendations for resolution of supply chain/ backorder issues • Pharmacoeconomic reviews and recommendations for cost savings with equivalent product conversions • Expiring product utilization optimization (leverage PINS or other to share product that will not be used prior to expiration with facilities that will) • Contracting compliance/order creation • Streamlining of administrative tasks such as B09 audit/reconciliations | Enhanced medication availability and cost savings as it relates to more efficient supply chain management | Inventory mapping, predictions, and decision support for medication procurement | Inventory mapping, predictions, and decision support for medication procurement | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2836 | Silverberry Surgery Planning AI | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Patient education and engagement; reducing physician burden and improving the physician-patient relationship. The Silverberry Surgery Planning AI utilizes agents to perform tasks related to the perioperative process: patient preparation and education, sterile processing and implant preparedness, intraoperative supply management, and postoperative planning and discharge assistance. | Increased efficiency, improved outcomes for veterans, improved health literacy. | Outputs are direct patient education and improved Veteran-provider communication, reducing the burden on providers by improving patient health literacy. | Outputs are direct patient education and improved Veteran-provider communication, reducing the burden on providers by improving patient health literacy. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3082 | Teledermatology Clinical Outcome Classification | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | This seeks to develop an NLP LLM tool that reviews dermatology and teledermatology clinical notes in the EHR and classify them into a clinical outcome category. Natural language processing is used to determine qualitative outcomes for skin diseases where quantitative metrics don't exist. | Measure and improve skin disease outcomes for both telehealth and in-person care. | Classification of clinical outcomes | Classification of clinical outcomes | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3365 | Summarize Progress Notes | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Reduce the amount of time required for a healthcare provider to review medical records during an active case within the Clinical Contact Center. | - Reduced case time per call - Reduced hold time - Reduced queue abandonment rate - Improved Veteran satisfaction - Improved patient safety | A summary of the patient medical record with the most important elements presented first. | A summary of the patient medical record with the most important elements presented first. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-372 | Parable 3D Wound Care Management System | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Assist clinicians in capturing wound data (2D/3D) using computer vision to better manage the wound lifecycle for the patient. | Allows clinicians to manage wounds using very precise computer vision techniques, rather than rely on manual, error prone hand measurements. | 2D and 3D volumetric measurements are then computed based on the segmented 3D reconstruction to produce length, width, surface area, depth, and volume measurements. | 2D and 3D volumetric measurements are then computed based on the segmented 3D reconstruction to produce length, width, surface area, depth, and volume measurements. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4349 | Coronary Artery Calcium Model | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | This is operationalization of a Coronary Artery Calcium (CAC) scoring computer vision model that can predict cardiovascular disease risk. The input is CT chest images and the output will be calcium scores and calcium masks that physicians can review to verify high patient cardiovascular risk. This will help clinicians identify and prioritize patients at high risk for cardiac events to intervene, improving the quality of veteran healthcare. | We expect this project to improve patient care outcomes by helping direct provider attention to patients that were identified as high-risk via findings. | The model will process medical images to produce a segmentation mask that shows providers whether or not there is disease and a calcium score that can help them find patients at high cardiovascular risk. | The model will process medical images to produce a segmentation mask that shows providers whether or not there is disease and a calcium score that can help them find patients at high cardiovascular risk. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4431 | Reportability of Cancer Diagnosis Information | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Identify whether a cancer diagnosis is present in the pathology reports and what diagnosis the patients have, including histology, site, and subsite. | Improve patient outcomes, save cost, and increase efficiency: Information extracted by the system will be highlighted and shown within the original context for review by the coordinators, within health informatics tools currently in production. This will decrease the time for information gathering for coordinators and improve their efficiency. This information may also help identify Veterans with diagnoses of interest for review by the coordinator, allowing for additional resources to be activated to support the Veteran if they meet program criteria. | Whether a cancer diagnosis is present in the pathology reports and what diagnosis the patients have, including histology, site, and subsite. | Whether a cancer diagnosis is present in the pathology reports and what diagnosis the patients have, including histology, site, and subsite. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4439 | Predict Patient Suicide | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Prevention of Veteran suicide is VHA’s top clinical priority. We use AI to identify patients with key risk factors for death by suicide. It is a tool that can help point clinicians to potentially critical information that they might not otherwise have been aware of. | Improve ability to identify Veterans at high-risk for suicide and intervene to reduce the risk of suicide. | Identification of patients with suicide risk factors. | Identification of patients with suicide risk factors. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4554 | Extract patients’ diagnosis information and map the diagnosis to oncotree | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Extract patient diagnoses from unstructured patient notes and map the diagnosis to oncotree, because this information is typically extracted manually and takes a long time to enter into the database. | This project could increase efficiency, save cost, and enhance patient outcomes. With real-time diagnosis tracking, the national oncology program is able to better support patients with cancer diagnoses and understand each Veteran's individual needs. Because AI extracts the information from the unstructured notes, it speeds up the review process; providers can save time to help more patients rather than spending a long time reviewing each note. | Patient cancer diagnosis and the oncotree mapping | Patient cancer diagnosis and the oncotree mapping | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4595 | Cancer Patients Symptom Burden | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Attempt to extract patients' symptom burden information from unstructured notes to increase the speed of providers identifying patients that may benefit from additional support for symptom management or mental health. | Providers are able to identify patients that might benefit from additional support more quickly rather than reviewing entire notes as AI would highlight the parts that are relevant to the symptoms. This increases efficiency and saves cost as well. With providers more aware of patients' symptoms, they can spend less time and effort identifying patients that suffer from high symptom burdens. Providers would be more able to reach out and support the patients in need of support, therefore improving patient outcomes. | Information on patient symptom burden, including symptoms related to mental health needs of cancer patients | Information on patient symptom burden, including symptoms related to mental health needs of cancer patients | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-462 | Billing Claims Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The AI solution is designed to help the VHA Office of Finance, Revenue Operations in several ways: identify missed revenue opportunities, ensuring no income potential is lost; highlight, predict, and analyze trends in actionable denials, helping to resolve systemic issues; reduce the administrative burden on staff by simplifying processes; and develop a national denial management analytics platform for consistent and effective financial operations across the VA healthcare network. | Increased revenue, decreased denials, and increased efficiency and staff satisfaction. | Predictions: e.g., forecasts about future events or trends based on historical data, such as predicting patient readmission rates or financial performance. Trends and insights: e.g., analysis and visualization of trends over time, such as tracking billing errors. Recommendations: e.g., suggestions based on data analysis, such as recommending treatment plans for patients or optimizing resource allocation. Visibility of analytics. Automated actions: e.g., performing tasks automatically based on AI analysis, such as automating denial management processes or patient appointment scheduling. | Predictions: e.g., forecasts about future events or trends based on historical data, such as predicting patient readmission rates or financial performance. Trends and insights: e.g., analysis and visualization of trends over time, such as tracking billing errors. Recommendations: e.g., suggestions based on data analysis, such as recommending treatment plans for patients or optimizing resource allocation. Visibility of analytics. Automated actions: e.g., performing tasks automatically based on AI analysis, such as automating denial management processes or patient appointment scheduling. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5087 | Koios | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | FDA Cleared. Used with Automated Breast Ultrasound (ABUS). Classifies breast ultrasound lesions and automates reporting for improved efficiency and streamlined workflows. Increased sensitivity and specificity with reduced variability. | Reduces unnecessary treatments (existing BI-RADS 3's can be accurately and safely downgraded) and provides probability of malignancy aligned to BI-RADS categorization. | Classifies breast ultrasound lesions | Classifies breast ultrasound lesions | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5579 | Fall Prediction Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The Morse Fall Scale (MFS), as it is currently utilized, often fails to accurately assess fall risk, highlighting the need for more precise and timely fall risk assessment tools to prevent falls. | The goal is to provide a more accurate fall assessment score than the existing Morse Fall Score which could improve patient safety outcomes. Better assessing fall risk and applying proper fall prevention interventions would also translate to cost avoidance and savings. | A numerical Fall Risk Score for fall risk stratification. | A numerical Fall Risk Score for fall risk stratification. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1082 | Healthcare Text Anonymizer | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | This application reduces the manual processing time for anonymization and improves the accuracy of removing identifiable information in healthcare text. This is important because one of the applications of this project is to publicly release data on the medical impact of the new electronic health records system, Oracle. As Oracle gets rolled out to more facilities, there is a need to release more anonymized incident reports of Oracle's impact on patient health. Manually anonymizing all of these reports is not scalable nor sustainable, which is why this AI application helps. | Increased efficiency and increased security of patient Personally Identifiable Information (PII) | Excel file with additional column(s) of anonymized text - this application removes identifying information (name, addresses, etc.) in healthcare text. | Excel file with additional column(s) of anonymized text - this application removes identifying information (name, addresses, etc.) in healthcare text. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-13 | EEO Consultation and Guidance | a) Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | As the number of EEO program managers within the VA decreases, it would be helpful to utilize AI to manage and analyze employee complaints, including those related to Equal Employment Opportunity (EEO). We want to have 24-hour EEO response to employee questions surrounding discrimination and harassment. The AI tool will not be equipped to provide legal or counseling services. It can perform complaint intake and triage, data analysis, and sentiment analysis, and provide information and resources. | Providing useful information in a timely manner. This is not legal advice. | Narrow the scope of an employee's complaint and provide the necessary resources and information. | Narrow the scope of an employee's complaint and provide the necessary resources and information. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1430 | Deduplication in Record Management | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | - Reduce manual process of opening record for review. - Triage urgent veteran follow up needs by recognizing urgent or routine in boxes. - Kick back record requests missing important clerical details like signatures, names, or ICD 10 codes. - Summarize Physical Therapy (PT) care records for VA practitioners to act on. - Reduce clinical errors and manpower / manual resources in clinically evaluating forms - Reduce clinical manpower and cloud storage in uploading duplicate documents | - Decrease clerical and clinical costs related to record management. - Increase Registered Nurse (RN) licensure to provide direct care. Too many RNs are pulled into record management, which takes them away from frontline needs. - Decrease Veteran harm or death - Increase Veteran satisfaction - Decrease burnout | An overlaying device reads all e-faxed records coming in. It recognizes: - If the record is urgent (RFS) or routine - The date and triages records that need more immediate action - Signatures. This product has been created. The final product will include a summary of details for rapid action. | An overlaying device reads all e-faxed records coming in. It recognizes: - If the record is urgent (RFS) or routine - The date and triages records that need more immediate action - Signatures. This product has been created. The final product will include a summary of details for rapid action. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2142 | AI in Digital Pathology | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Galen Second Read's key functionalities include image upload and analysis, flagging slides with high likelihood to contain AdC and displaying WSIs uploaded to the system along with the analysis results. Flagged findings constitute a recommendation for additional review by a pathologist. | Increased productivity which will provide pathologists more time with complex cases | The Galen Second Read is cloud-hosted and utilizes scanned whole slide images (WSIs) generated from the Philips Ultra Fast scanner (UFS). For each WSI input, the Galen Second Read automatically analyzes the WSI and outputs the following: - Binary classification of the likelihood (high/low) to contain prostate adenocarcinoma (AdC) based on a predetermined threshold of the neural network output. - For slides classified as high likelihood to contain AdC, slide-level findings are flagged and visualized (AdC score and heatmap) for additional review by a pathologist. - For slides classified as low likelihood to contain AdC, no additional output is available. Galen Second Read's key functionalities include image upload and analysis, flagging slides with high likelihood to contain AdC and displaying WSIs uploaded to the system along with the analysis results. Flagged findings constitute a recommendation for additional review by a pathologist. | The Galen Second Read is cloud-hosted and utilizes scanned whole slide images (WSIs) generated from the Philips Ultra Fast scanner (UFS). For each WSI input, the Galen Second Read automatically analyzes the WSI and outputs the following: - Binary classification of the likelihood (high/low) to contain prostate adenocarcinoma (AdC) based on a predetermined threshold of the neural network output. - For slides classified as high likelihood to contain AdC, slide-level findings are flagged and visualized (AdC score and heatmap) for additional review by a pathologist. - For slides classified as low likelihood to contain AdC, no additional output is available. Galen Second Read's key functionalities include image upload and analysis, flagging slides with high likelihood to contain AdC and displaying WSIs uploaded to the system along with the analysis results. Flagged findings constitute a recommendation for additional review by a pathologist. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2407 | Lambient | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Reduce clinical burnout and decrease the amount of late-night time spent on documentation. | Improve clinical staff satisfaction by letting them spend more time with the patient and less time documenting the encounter. This would hopefully improve patient satisfaction. | Generating text by LLM: Lambient is an AI scribe tool that will allow clinical staff to have an ambient listening tool within a patient session. After the session, Lambient is able to draft a clinical note which a provider can potentially enter into the EHR. | Generating text by LLM: Lambient is an AI scribe tool that will allow clinical staff to have an ambient listening tool within a patient session. After the session, Lambient is able to draft a clinical note which a provider can potentially enter into the EHR. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2660 | Community Care Records Retrieval | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | VA care teams often struggle to access Community Care documents in a timely manner. When access is granted, the process is labor-intensive, requires significant manual administration by clinical staff, and is not always successful. Veterans may also be asked to bring in a copy of their records when VA providers are unable to obtain them. This negatively impacts Veterans’ care coordination and makes them lose trust in VA. | Increased efficiency in coordinating VA community care documentation, enhancing Veteran satisfaction with care and increasing quality of care and Veteran safety. | Requests for community care records or retrieval and ingestion of community care records. | Requests for community care records or retrieval and ingestion of community care records. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2673 | CPT Worksheet Helper | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Patients struggle to complete Cognitive Processing Therapy (CPT) homework and want feedback beyond when they see their therapist. Research has shown that homework frequency and quality can support greater improvement. | Patients may remain more engaged in treatment and experience greater improvement. | LLM will provide the user with encouragement, feedback, and guidance. Another LLM may be used to detect risk and respond appropriately | LLM will provide the user with encouragement, feedback, and guidance. Another LLM may be used to detect risk and respond appropriately | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2701 | AI-Assisted View Alert Management | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | By analyzing and prioritizing alerts, this tool will reduce unnecessary notifications and help clinicians focus on what matters most, enhancing care for Veterans. | - Reduce alert fatigue and clinician burnout by minimizing unnecessary distractions. - Improve clinical efficiency and workflows through targeted triage and routing of critical alerts. - Enhance patient safety and resource allocation by prioritizing meaningful and actionable view alerts. - Increase return on existing technology investments by optimizing the performance of the current alerting functionality. | Switchboard will build new AI models trained on annotated message data. Message annotation will be conducted in a structured process using a dataset of View Alerts (subject to design). The end-state AI models will apply the following label types to messages: - Label assignment of 5 core labels: Clinically Urgent, Clinician, Scheduling, Form, and Refill - Additional label assignment of VA-specific labels based on review of the dataset where patterns are identified. These will include: a) No potential provider action b) Medication nearing expiration c) Non-critical alerts for inactive providers Or additional view alert labels that are relevant to the VA workflow: - Flagging of clinically irrelevant View Alerts - Flagging of duplicative View Alerts | Switchboard will build new AI models trained on annotated message data. Message annotation will be conducted in a structured process using a dataset of View Alerts (subject to design). The end-state AI models will apply the following label types to messages: - Label assignment of 5 core labels: Clinically Urgent, Clinician, Scheduling, Form, and Refill - Additional label assignment of VA-specific labels based on review of the dataset where patterns are identified. These will include: a) No potential provider action b) Medication nearing expiration c) Non-critical alerts for inactive providers Or additional view alert labels that are relevant to the VA workflow: - Flagging of clinically irrelevant View Alerts - Flagging of duplicative View Alerts | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2865 | Ephesoft Fax Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Ensure timely healthcare by increasing speed of processing incoming faxes so Request For Services (RFS) and medical records can be clinically reviewed and uploaded to patient electronic health records. | Estimated cost savings of $2 million while enhancing the timeliness of care, standardizing processes and reducing human error. | Sorts incoming faxes for humans to review, which will then be uploaded to patient electronic health records. | Sorts incoming faxes for humans to review, which will then be uploaded to patient electronic health records. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2906 | Natural Language Online Workflow (NOW) | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | - Information Gaps and Misinformation: Veterans and staff often have trouble finding accurate, consistent answers about vasectomy procedures, eligibility, risks, and next steps. Policies and contacts change and not everyone knows where to look. - Inefficient Consult Request Process: Requesting a consult can be confusing, requiring multiple phone calls, paper forms, or follow-ups that delay care. Veterans may give up or receive the wrong info, which slows down the process and frustrates everyone. - Inefficient Use of Staff Time on Routine Questions: Clinical and administrative staff spend a lot of time answering the same questions or redirecting Veterans to the right place. This eats up resources that could be used for more complex cases. - Missed or Delayed Care: If eligibility is unclear, or the Veteran doesn’t know how to proceed, appointments and procedures get delayed. This can lead to worse outcomes and more work down the line. - Inconsistent Communication: Without a standard source, Veterans might get different answers depending on who they ask or what site they visit. | - Centralizes reliable information in one place and it is always up-to-date. - Removes ambiguity for users by guiding them through eligibility and next steps. - Streamlines the consult request process, making it easy to start and track the process. - Reduces repetitive work for staff, so they can focus on higher-priority needs. - Flags new or complex questions for human follow-up, so nothing falls through the cracks. | - Personalized Responses: the chatbot gives tailored answers to user questions about vasectomy procedures, eligibility, risks, recovery, and next steps, based on the latest VA policies and user input. - Eligibility Screening Results: it guides users through a screening process and outputs a clear summary: whether the Veteran appears eligible, concerns or risks to discuss with a provider, and any required follow-up steps. - Consult Request Guidance: it outputs step-by-step instructions or links to start a vasectomy consult, including local facility contact info and how to proceed based on the user’s location. - Summary of User Input: the chatbot can output a formatted summary of the Veteran’s responses (e.g., medical history, concerns) for review by a clinician or for the user to save. - Referral or Escalation Notices: If the chatbot can’t answer a question, it outputs a message acknowledging the limit and flags the question for human follow-up, ensuring nothing falls through the cracks. - Data Logs and Analytics: on the backend, it outputs logs of common questions, user needs, and potential gaps in information. This helps the agency track trends, monitor usage, and improve over time. | - Personalized Responses: the chatbot gives tailored answers to user questions about vasectomy procedures, eligibility, risks, recovery, and next steps, based on the latest VA policies and user input. - Eligibility Screening Results: it guides users through a screening process and outputs a clear summary: whether the Veteran appears eligible, concerns or risks to discuss with a provider, and any required follow-up steps. - Consult Request Guidance: it outputs step-by-step instructions or links to start a vasectomy consult, including local facility contact info and how to proceed based on the user’s location. - Summary of User Input: the chatbot can output a formatted summary of the Veteran’s responses (e.g., medical history, concerns) for review by a clinician or for the user to save. - Referral or Escalation Notices: If the chatbot can’t answer a question, it outputs a message acknowledging the limit and flags the question for human follow-up, ensuring nothing falls through the cracks. - Data Logs and Analytics: on the backend, it outputs logs of common questions, user needs, and potential gaps in information. This helps the agency track trends, monitor usage, and improve over time. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3149 | Ambient Scribing with Patient Chart Exploration | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The typical clinical workflow in primary care is complex, with numerous interruptions and areas of friction. I want to understand how AI could augment my flow by replacing the lower-tier cognitive work and allowing me to focus on making decisions and providing counseling. Much of this cognitive load is now invisible, even to myself. For example, for a patient with diabetes, I may spend time hunting down the appropriate labs and medication history as well as summarizing the subjective information they've shared over the years about the difficulties they've had in adherence. If this could be done for me, similar to what a clinical assistant or medical trainee may do, I would be better able to care for my patient in the here and now. There are many people working on optimizing ambient scribing and AI summarization/querying. My goal is not to create a state of the art version of either, but to focus on understanding how basic versions of these tools could help me. That may include context-aware changes in data presentation, the use of visualization, and improvements in search and browsing. | This tool could help replace the lower-tier cognitive work and allow me to focus on making decisions, providing counseling, and overall, be better able to care for my patients. The hope would be to create a framework that would be available through Clinical Decision Support (CDS) or another internal VA platform, allowing clinicians to customize chart data for their own workflows. Improved scribing, natural language processing and AI summarization could be swapped in as they develop further over time. 
| Context- and diagnosis-aware chart browsing, AI summarization of chart data, and draft clinical notes based on chart data and transcripts. | Context- and diagnosis-aware chart browsing, AI summarization of chart data, and draft clinical notes based on chart data and transcripts. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3415 | JLV Booster | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | It is difficult for physicians to figure out the details of care that occurred outside the VA. Even when the data exists and can be viewed in Joint Legacy Viewer (JLV), the process of viewing records is time-consuming. AI can help summarize the existing external data such that physicians have an idea of what information exists and where to find the full details (no use of AI), if needed. | Improved patient outcomes and increased physician efficiency | Summary of external medical record | Summary of external medical record | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3521 | Veterans Life Tailored | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The Veterans Life Tailored (VLT) project leverages advanced AI technologies to tailor benefits and support services specifically for veterans. This project harnesses the power of artificial intelligence to analyze vast amounts of data, providing personalized recommendations and streamlined access to a wide array of veteran benefits. Through the use of AI, this product will provide additional information to veterans based on service information. We will suggest benefits with the goal of supporting the application process in the future. | - Improved Veteran and family/beneficiary experience - Increased number of applications and approvals for benefits | Output will be driven by the AI - it will be a Graphical User Interface (GUI) with information on benefits and links for more information. | Output will be driven by the AI - it will be a Graphical User Interface (GUI) with information on benefits and links for more information. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3573 | Claims Adjudication and Financial Decision-Making | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | VA staff spend large amounts of time gathering and reviewing information to process payments and recover revenue. This slows down financial decisions and delays service to Veterans and providers. The AI platform brings the necessary data together, highlights key issues, and streamlines next steps so staff can work faster and focus on higher value decisions. | It will deliver cost savings and efficiency gains, strengthening VA's overall financial stewardship. | The system outputs are claim classifications, eligibility and coding checks, consolidated documentation, risk flags, draft appeals packages, and dashboard reports, which are all presented to staff as decision support rather than final decisions. | The system outputs are claim classifications, eligibility and coding checks, consolidated documentation, risk flags, draft appeals packages, and dashboard reports, which are all presented to staff as decision support rather than final decisions. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3617 | Mindray TE X | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The TEX20/TEX20 Pro/TEX20S/TEX20T/TEX20 Exp/TEX20 Elite/TEX10/TEX10 Pro/TEX10S/TEX10T/TEX10 Exp/TEX10 Elite/TE X/TE X Lite Diagnostic Ultrasound System is applicable for adults, pregnant women, pediatric patients, and neonates. It is intended for use in ophthalmic, fetal, abdominal, intra-operative (abdominal, thoracic, and vascular), laparoscopic, pediatric, small organ (breast, thyroid, testes), neonatal and adult cephalic, trans-rectal, trans-vaginal, musculoskeletal (conventional, superficial), thoracic/pleural (for detection of fluid and pleural motion/sliding), adult and pediatric cardiac, trans-esophageal (cardiac), peripheral vessel, and urology exams. | Accelerate ultrasound workflows. | Provides automated measurements of imaged structures and calculations of quantities of interest. | Provides automated measurements of imaged structures and calculations of quantities of interest. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3732 | Modeling to Learn (MTL) | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Modeling to Learn is designed to address the limited start, flow, and dose of evidence-based psychotherapy and evidence-based pharmacotherapy for Post Traumatic Stress Disorder (PTSD), depression, alcohol use disorder and opioid use disorders. | Improved patient safety and recovery and efficient use of staff and financial resources to deliver timely access to VHA’s highest quality behavioral healthcare. | Care improvement priorities and recommendations identified with Modeling to Learn leverage local VA Medical Center (VAMC) or Community-based Outpatient Clinic (CBOC) a) appointment and patient accumulations, b) rate of care flows and c) frequently made care decisions (e.g., return-to-clinic orders) to achieve policy-consistent, evidence-based episodes of care within local workforce constraints. Decision-makers can review simulated future trends for the next 2 years based on the last 2 years and examine alternative decisions most likely to improve local care to meet VHA standards for access and quality. The Modeling to Learn data user interface follows these steps to prioritize improvement decisions: Prioritize a local workflow, Prioritize the highest volume or greatest increase in diagnoses, Prioritize imbalances between diagnoses and policy-consistent evidence-based care flow, Prioritize evidence-based psychotherapy or pharmacotherapy start and dose imbalance, Prioritize time to detect patient improvement or risk using measurement-based care. 
The Modeling to Learn simulation user interface follows these add'l steps to prioritize improvement decisions: Prioritize medication management, psychotherapy, team care or team flow, Prioritize evidence-based care episodes balancing flow, Prioritize leverage of small new decisions with large beneficial improvements for the local patient population, Explain why the decision improves care using feedback. | Care improvement priorities and recommendations identified with Modeling to Learn leverage local VA Medical Center (VAMC) or Community-based Outpatient Clinic (CBOC) a) appointment and patient accumulations, b) rate of care flows and c) frequently made care decisions (e.g., return-to-clinic orders) to achieve policy-consistent, evidence-based episodes of care within local workforce constraints. Decision-makers can review simulated future trends for the next 2 years based on the last 2 years and examine alternative decisions most likely to improve local care to meet VHA standards for access and quality. The Modeling to Learn data user interface follows these steps to prioritize improvement decisions: Prioritize a local workflow, Prioritize the highest volume or greatest increase in diagnoses, Prioritize imbalances between diagnoses and policy-consistent evidence-based care flow, Prioritize evidence-based psychotherapy or pharmacotherapy start and dose imbalance, Prioritize time to detect patient improvement or risk using measurement-based care. The Modeling to Learn simulation user interface follows these add'l steps to prioritize improvement decisions: Prioritize medication management, psychotherapy, team care or team flow, Prioritize evidence-based care episodes balancing flow, Prioritize leverage of small new decisions with large beneficial improvements for the local patient population, Explain why the decision improves care using feedback. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3829 | Nursing Career Path Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | It will solve several critical leadership challenges within the VA system, including: - Identification of leadership potential - Workforce optimization - Centralized training and development resources properly matched with different levels of leadership stages - Support for leadership talent identification (qualified leaders to assume detail positions) - Enhanced veteran-centric care and operational efficiency - Cost-efficiency (recruitment, retention) The tool aims to systematically review staff qualifications, identify leadership levels, collate VA-wide resources, and assist leadership in talent identification and assignment for vacant leadership positions. By leveraging AI technology, the VA can ensure the right nursing talent is developed and effectively placed in leadership roles, ultimately improving service delivery and staff satisfaction. | Provide a structured and data-driven approach to nursing career development and succession planning within the VA. This tool aims to create a well-organized and efficient process for identifying, developing, and placing nursing (potential) leaders within the VA, ultimately enhancing patient care, reducing costs, and improving overall operational efficiency. It will also better align staff qualifications with leadership roles, improve succession planning, reduce gaps in leadership positions, and increase staff engagement and satisfaction through clear career progression paths. | - Review and analyze staff qualifications and talent to identify leadership potential. Categorize staff into leadership levels ranging from emerging leaders to senior leaders. 
- Collate and centralize educational and training resources from the Talent Management System (TMS), Veterans Integrated Service Networks (VISN), and individual VA site training programs (e.g., Nurse Manager Passport, Nurse Manager Academy, and Chief Nurse Mentoring program). - Match staff and leaders for internal/external leadership mentorship program potential (virtual or face-to-face) - Serve as a resource for leadership to support talent identification, such as suitable candidates for detail assignments for vacant leadership positions | - Review and analyze staff qualifications and talent to identify leadership potential. Categorize staff into leadership levels ranging from emerging leaders to senior leaders. - Collate and centralize educational and training resources from the Talent Management System (TMS), Veterans Integrated Service Networks (VISN), and individual VA site training programs (e.g., Nurse Manager Passport, Nurse Manager Academy, and Chief Nurse Mentoring program). - Match staff and leaders for internal/external leadership mentorship program potential (virtual or face-to-face) - Serve as a resource for leadership to support talent identification, such as suitable candidates for detail assignments for vacant leadership positions | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4412 | Veteran Rehabilitation Services | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Make the connection of veterans to rehabilitation services more streamlined and efficient. Gather information over multiple platforms and create a timeline of service to better inform care and delineate access to additional services. | By gathering information over multiple platforms and creating a service timeline to better inform care and delineate access to additional services, we can streamline care processes, efficiently deliver services, and ensure veterans are not lost in transitions of care. | Services already utilized, including, but not limited to, physical therapy care, delivery of durable medical equipment, and creation of hospitalization summaries | Services already utilized, including, but not limited to, physical therapy care, delivery of durable medical equipment, and creation of hospitalization summaries | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4632 | Freed AI Medical Scribe | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Increase efficiency of our Primary Care providers. The AI scribe records the Veteran and provider conversation during an exam - it eliminates unnecessary parts of the exam conversation and provides a written report for the provider to edit and copy into the Veteran's chart. The AI does not make a diagnosis; it just provides the written summary that the provider uses to complete their encounter and note. | - Increased efficiency by having a written record of the exam. - Enhanced patient outcomes, as the scribe records things that a provider might have missed in the conversation or when coming back to the note/encounter the next day. - Fewer missed opportunities for the provider. | The written output is an editable report that categorizes the visit into the following fields: Chief Complaint, History of Present Illness, Current Regimen, Past Health, Systems Review, Personal and Family History, Physical Exam, Formulation, and Plan. However, users can create their own templates and customize their note formats for their specific needs, including specialty needs. | The written output is an editable report that categorizes the visit into the following fields: Chief Complaint, History of Present Illness, Current Regimen, Past Health, Systems Review, Personal and Family History, Physical Exam, Formulation, and Plan. However, users can create their own templates and customize their note formats for their specific needs, including specialty needs. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4714 | Improve Care for Opioid Use Disorder | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | It is designed to improve treatment retention among veterans receiving medication treatment for opioid use disorder. | The hope is the AI will improve treatment retention, and therefore, patient outcomes such as decreased overdose rates and improved quality of life. | The outputs of the AI system will be the predicted probability of medication treatment retention and the predicted probability of overdose. | The outputs of the AI system will be the predicted probability of medication treatment retention and the predicted probability of overdose. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4942 | Venous Care Pathway Improvement | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Care pathways for venous diseases (e.g., deep vein thrombosis, pulmonary embolism, post-thrombotic syndrome, superficial venous insufficiency, phlebolymphedema, and venous leg ulceration) are inconsistent and inefficient, leading to poor clinical outcomes and lost economic value for health payers and providers. Impact Health will work directly with VHA employees to standardize care protocols for selected venous diseases and implement a digital platform for Veterans, care providers, and hospital executives to support and view results of the improved protocol. | The digital platform empowers Veterans and providers with tools that include but aren’t limited to: track clinical outcomes, view longitudinal venous care progress, communicate remotely, manage care tasks for Veterans, access educational resources, prioritize Veterans based on status updates, collect research-grade clinical and economic data, and provide analytics of clinical and economic benefit of improved VHA venous pathways compared to the historical standard of care in VHA health systems. Hospital executives will also have access to a dashboard that demonstrates the Veteran population with venous diseases, population-level clinical outcome progress, recruitment statistics, facility economics, and more. Streamlining protocols for diagnosis, stratification, referrals, treatment, and post-treatment care will reduce health worker burden, create economic efficiencies, and significantly improve Veteran health outcomes. 
Providing a digital health platform that reinforces the protocols will facilitate better provider support for Veterans, empower Veterans to manage their own diseases, and will collect real-world data to validate the benefit of the improved protocol. | Evidence-based and cost-effective clinical care pathways, including risk stratification, guidance on the next diagnostic and therapeutic interventions, remote patient monitoring, and identification of barriers to access to care. | Evidence-based and cost-effective clinical care pathways, including risk stratification, guidance on the next diagnostic and therapeutic interventions, remote patient monitoring, and identification of barriers to access to care. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4951 | Clinical Trial Matching | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The trial matching rate at the VA is very low. Many patients aren't aware of the possible trial opportunities. The National Oncology Program has clinical trial coordinators manually reviewing patients' information and trial information to decide which trials are potential matches for the patients that are interested. This process is very time-consuming, usually taking over an hour per patient. We intend to use AI to speed up the process and recommend potential trials, so that the trial coordinator is able to help more patients with the same amount of time. The tool will match patients with clinical trials that they are potentially eligible for. | This tool can enhance patient outcomes, save cost, and increase efficiency. Patients are more likely to enroll in clinical trials, the trial coordinator would be able to help more patients with the same amount of time, and clinical trials could receive a higher enrollment rate. | The patient ratings for each clinical trial; whether patients are a strong match, a potential match, or not a match; the reasons for the match determination. | The patient ratings for each clinical trial; whether patients are a strong match, a potential match, or not a match; the reasons for the match determination. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5124 | ML Model for medication prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Pharmacy inventory management | - Removes ambiguity for users by guiding them through eligibility and next steps. | Outliers in drug purchasing history (e.g. sudden increase in purchasing of certain high cost chemotherapeutics) Decreased utilization of certain therapies and increases in the utilization of therapeutic alternatives, whether costly or less costly. Reconcile drug shortage data that may explain purchasing of therapeutic alternatives. | Outliers in drug purchasing history (e.g. sudden increase in purchasing of certain high cost chemotherapeutics) Decreased utilization of certain therapies and increases in the utilization of therapeutic alternatives, whether costly or less costly. Reconcile drug shortage data that may explain purchasing of therapeutic alternatives. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5182 | Leveraging Acoustic-Linguistic Analytics and Social Determinants to Enhance Suicide Prevention Efforts in Veterans Crisis Line Interventions | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | This project leverages machine learning (ML) and data analytics to enhance suicide prevention efforts by improving risk prediction, crisis intervention effectiveness, and understanding of social and environmental factors influencing Veteran suicide risk. It will develop a data processing pipeline to extract and analyze VCL call data directly from call audio recordings, enabling multimodal ML models that integrate linguistic, acoustic, and contextual features to identify imminent suicide risk. Additionally, it will evaluate crisis intervention effectiveness and link VCL call data with external datasets to assess environmental stressors such as noise pollution. Finally, models will be validated on a larger dataset within the Veterans Affairs (VA) Cloud to ensure scalability and internally-sustainable integration into VCL workflows, delivering artificial intelligence (AI)-driven decision-support tools and actionable policy insights to improve crisis response strategies. | Improve VCL protocols and procedures; improve suicide prevention efforts; increase efficiency; enhance patient outcomes. | VCL call data analysis to identify imminent suicide risk and evaluate crisis intervention effectiveness. | VCL call data analysis to identify imminent suicide risk and evaluate crisis intervention effectiveness. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5693 | VAHC CRM- Veteran Self-Service | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The Clinical Contact Center (CCC) receives 45 million calls per year. At this time the CCC has a queue abandonment rate of 9.4%, which is significantly higher than the national average for contact centers. AI will reduce the number of abandoned calls. | The CCC receives 45 million calls per year. At this time the CCC has a queue abandonment rate of 9.4%, which is significantly higher than the national average for contact centers. AI will allow Veterans to get answers to commonly asked questions and access basic services, which will allow Medical Service Assistants (MSAs) to answer more complex calls, reducing hold times for Veterans and reducing the number of abandoned calls. | Refilled prescriptions; answers to commonly asked questions; decisions to escalate a contact to a triage nurse. | Refilled prescriptions; answers to commonly asked questions; decisions to escalate a contact to a triage nurse. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5821 | Evidence, Policy, and Implementation Center (EPIC) QUERI | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Provider burnout due to medical documentation burden, which can put Veteran care at risk if unaddressed. | Increased efficiency. | An implementation toolkit to support scale up and spread of ambient dictation tools throughout the national VA healthcare system. | An implementation toolkit to support scale up and spread of ambient dictation tools throughout the national VA healthcare system. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5944 | PowerScribe One | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | It enhances existing radiology dictation and report creation software by providing AI findings and narrative text that help automate and augment reporting. Machine learning and advanced language understanding technologies derive meaning from the data to drive workflows, meaning less work for the radiologist and more consumable, meaningful reports to inform care decisions. | It increases the radiologists' efficiency, automates report writing, and enhances patient care decisions. | These solutions interpret your images and provide the radiologist with different findings, such as lung nodules, fractures, brain bleeds, liver lesions, etc. | These solutions interpret your images and provide the radiologist with different findings, such as lung nodules, fractures, brain bleeds, liver lesions, etc. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6055 | Community Care Clinical Documents | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Thousands of Veterans receive care in the community in both unscheduled urgent/acute care settings and scheduled care settings. Following these encounters, VA must receive clinical documentation describing the encounter and address follow-up care needs. The current process for this is manual, inefficient, and not always effective at timely documentation retrieval. This results in delayed care for Veterans and duplication of services. | Solving this problem will improve care quality for Veterans, decrease administrative burden on clinicians, automate manual processes, and improve workforce efficiency, resulting in decreased burnout and overtime expenditures. | This product will automate identification, retrieval, and notification in the context of community care clinical documents. | This product will automate identification, retrieval, and notification in the context of community care clinical documents. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6373 | Medsafely | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Reduce the length of time it takes to perform medication reconciliation. | - Save cost and time - with just 3 pharmacists using the product, net savings are $5,388.00. - Increase accuracy, safety, and employee satisfaction. | Provides a clear list of medications added, removed, and changed. It also reviews medications for several factors that can cause patient harm and provides alerts to the physician (i.e., Beers list, renal dosing, falls, ACB calculator) | Provides a clear list of medications added, removed, and changed. It also reviews medications for several factors that can cause patient harm and provides alerts to the physician (i.e., Beers list, renal dosing, falls, ACB calculator) | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6638 | Predict Septic Shock in ICU Patients | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Reduce ICU mortality and morbidity related to septic shock and respiratory failure. Provide nurses and physicians with the opportunity to intervene earlier in the course of these high mortality, high morbidity conditions. | Improved patient outcomes through reductions in mortality and morbidity. Cost savings related to lower length of stay associated with these conditions. | Primary outputs are risk measurements of how likely an ICU patient will develop septic shock or respiratory failure within the next 48 hours. | Primary outputs are risk measurements of how likely an ICU patient will develop septic shock or respiratory failure within the next 48 hours. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6744 | Romexis Dental Imaging Software | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Automates processes and speeds up dental X-ray reads. | Improves workflow, enhances dental implant planning, and automates tasks | Automatic segmentation of anatomical structures, intelligent superimposition of CBCT and intraoral scans, and automated implant planning | Automatic segmentation of anatomical structures, intelligent superimposition of CBCT and intraoral scans, and automated implant planning | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-764 | EEV/OHI Document Types | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Increase the speed and accuracy of identifying document types and routing thereof. | Enhance Veteran and Veteran Family Member experience with claims processing to ensure timely and accurate claims resolutions. | Correct document types and routing of those documents for further processing. | Correct document types and routing of those documents for further processing. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-7910 | Electronic Mammogram Reporting (eMAMR) | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Clinical data extraction from diverse and unstructured sources remains a bottleneck for healthcare providers and agencies. For example, providers may need to re-enter the same patient information across different consults, which is inefficient and takes away valuable clinical time. Existing processes are manual, time-consuming, and prone to errors, which leads to delays in integrating critical patient information (such as PDFs and scanned records) into Electronic Health Record (EHR) systems like CPRS/VistA. This limits care coordination, slows decision-making, and increases administrative workload. The proposed AI solution will automate the workflow for clinical data extraction and integration using OpenAI’s GPT-4o API for computer vision and generative tasks. Leveraging Databricks and AWS, the system processes PDFs and images—extracting clinical data, converting it to structured formats (JSON), and transforming it into standardized clinical notes (TIU/SMART notes) for seamless integration into CPRS. | - Efficiency Gains: Automation dramatically reduces manual data entry and reconciliation, saving time for clinical staff. - Improved Data Quality: AI-driven extraction increases accuracy, consistency, and completeness of patient records. - Faster Care Coordination: Rapid data integration leads to quicker and more informed clinical decisions, supporting timely patient care. - Higher Scalability and Support of VA Mission: For VA, these outcomes will directly advance its mission to provide faster, safer, and more connected healthcare to the Veterans. | - Extracted JSON Files: Clinical data extracted from incoming fax/PDF images using OpenAI GPT-4o. - TIU/SMART Notes: Automated note creation in CPRS/VistA with mirrored SMART notes and health factors. - Consult Closure Actions: Automated documentation supporting the closure of consults. - Enriched Clinical Data: Data contextualized for clinical use and stored for retrieval in Databricks and AWS S3. | - Extracted JSON Files: Clinical data extracted from incoming fax/PDF images using OpenAI GPT-4o. - TIU/SMART Notes: Automated note creation in CPRS/VistA with mirrored SMART notes and health factors. - Consult Closure Actions: Automated documentation supporting the closure of consults. - Enriched Clinical Data: Data contextualized for clinical use and stored for retrieval in Databricks and AWS S3. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4889 | Artificial Intelligence Assisted Procurement Tools (AIAPT) | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The Artificial Intelligence Assisted Procurement Tools initiative is designed to address problems within the procurement processes of VA: 1. Workforce Efficiency: Procurement staff face significant workloads and complexity in managing procurement processes, especially with staff shortages and changing environments 2. Inefficiencies in System Integration: Current procurement systems are often fragmented, leading to a lack of seamless integration between various tools and platforms 3. Workflow Automation: Many procurement tasks are manually intensive, leading to delays, errors, and increased workload for procurement staff 4. Policy Compliance: Ensuring adherence to Federal Acquisition Regulation, Veterans Affairs Acquisition Regulation, and other relevant policies can be complex and error-prone 5. Data Management and Analysis: The vast amount of procurement data available is often underutilized due to challenges in managing, analyzing, and extracting actionable insights 6. Governance and Transparency: Maintaining transparency and accountability in procurement processes is challenging, leading to potential issues in governance and stakeholder trust 7. Scalability and Flexibility: Existing procurement systems may lack the flexibility to adapt to evolving needs and scalability to handle increasing workloads 8. Knowledge Management: Inefficient knowledge sharing and management can hinder the effectiveness of procurement processes | Faster and more efficient procurement of medical supplies and services ensures that healthcare providers have timely access to the resources needed for patient care. Reduced Operational Costs: Automation of repetitive and administrative tasks reduces the need for manual labor, resulting in significant cost savings. Minimized Waste and Overhead: Improved forecasting and inventory management prevent over-ordering and reduce waste, further contributing to cost reduction. Faster Procurement Cycles: Automation and streamlined workflows reduce the time required for procurement processes, enabling faster acquisition of goods and services. Enhanced Decision-Making: AI-driven insights and predictive analytics provide procurement staff with data-driven recommendations, improving decision-making efficiency and effectiveness. Reduced Administrative Burden: Automation of routine tasks allows procurement staff to focus on higher-value activities, increasing job satisfaction and reducing burnout. Enhanced Public Trust and Accountability. Scalability and Future-Proofing: The modular and customizable AI framework ensures that the VA’s procurement systems can scale and adapt to future needs, providing long-term value and sustainability. By providing these outputs, the AIAPT initiative will significantly enhance the VA’s procurement processes, driving greater efficiencies, improving compliance, and delivering superior services to Veterans. | 1. Automated compliance checks 2. Forecasts for demand planning, inventory management, and budget allocation based on historical data and predictive modeling 3. Reports and dashboards displaying supplier performance metrics such as delivery times, quality ratings, and compliance with contract terms to enable better supplier management 4. Detailed analysis of procurement spend, categorized by supplier, department, and commodity 5. Actionable recommendations for procurement strategies, contract negotiations, and cost reductions based on AI analysis 6. Automated drafting and analysis of procurement documents reduces the time and effort required for document creation 7. Identification and assessment of risks associated with procurement activities, along with suggested mitigation strategies helps proactively manage and mitigate risks 8. Interactive dashboards and visualizations presenting key procurement metrics, trends, and insights enhances visibility into procurement activities 9. Advanced search capabilities that allow users to retrieve relevant procurement information quickly and accurately 10. Reports on the progress and effectiveness of training programs, user adoption rates, and feedback from procurement staff 11. Detailed logs and records of AI system activities By providing these outputs, the AIAPT initiative will significantly enhance the VA’s procurement processes, driving greater efficiencies, improving compliance, and delivering superior services to Veterans. | 1. Automated compliance checks 2. Forecasts for demand planning, inventory management, and budget allocation based on historical data and predictive modeling 3. Reports and dashboards displaying supplier performance metrics such as delivery times, quality ratings, and compliance with contract terms to enable better supplier management 4. Detailed analysis of procurement spend, categorized by supplier, department, and commodity 5. Actionable recommendations for procurement strategies, contract negotiations, and cost reductions based on AI analysis 6. Automated drafting and analysis of procurement documents reduces the time and effort required for document creation 7. Identification and assessment of risks associated with procurement activities, along with suggested mitigation strategies helps proactively manage and mitigate risks 8. Interactive dashboards and visualizations presenting key procurement metrics, trends, and insights enhances visibility into procurement activities 9. Advanced search capabilities that allow users to retrieve relevant procurement information quickly and accurately 10. Reports on the progress and effectiveness of training programs, user adoption rates, and feedback from procurement staff 11. Detailed logs and records of AI system activities By providing these outputs, the AIAPT initiative will significantly enhance the VA’s procurement processes, driving greater efficiencies, improving compliance, and delivering superior services to Veterans. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5048 | SAC AI tools to automate application processing and data reporting | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Longer response times and missed prioritization: Without automated NLP-driven sorting and predictive routing, incoming requests would sit in general queues or rely on slow manual triage, causing delays in getting issues to the right experts. Urgent or complex inquiries might not be identified quickly; for example, a critical contracting issue could remain unattended until staff notice it in an Outlook folder, since no AI is in place to flag or escalate it based on content or sentiment. This increases the risk of SLA breaches on time-sensitive tasks and leads to program office frustration due to slow resolution. Inefficient use of resources and lost insights: In the absence of AI automation, skilled staff must continue spending a large portion of their day on mundane tasks (reading emails, categorizing tickets, data entry) instead of focusing on higher-value work. This manual workload is an inefficient use of human capital and can introduce errors or inconsistencies. Moreover, we would forgo the benefits of AI-driven process mining and analytics that reveal bottlenecks or compliance issues in our processes. Not pursuing this effort means continuing to operate with slower, labor-intensive workflows and missing an opportunity to free up our talent for more mission-critical activities. The SAC would remain reactive and prone to bottlenecks, directly impacting the timeliness and quality of support we provide. | Implementing AI Builder will tangibly improve Veterans’ experience with the services supported by SAC. Veterans and other end-users will see their requests handled and resolved more quickly. For example, the AI’s natural language processing will immediately categorize each request’s urgency and topic, then route it to the right team or individual. This means shorter wait times and faster answers. A support ticket can be addressed in hours because the system automatically put it in the correct queue and even suggested likely solutions. Higher consistency and proactive improvements: AI Builder’s capabilities ensure that each request is understood in context. If a customer’s message expresses frustration or urgency, the system will recognize that sentiment and adjust the tier of service, prompting staff to prioritize that case. Additionally, the AI identifies common pain points or frequent questions. The SAC can proactively address these; for instance, if many customers struggle with a certain form or system, or if a particular procurement process causes delays, those insights will drive improvements in training, documentation, or process design. In the long run, this continuous improvement loop means better programs and services for Veterans: fewer errors, more transparent processes, and resources that get to the field faster. The end result is customers, and indirectly Veterans, receiving quicker, more reliable, and more responsive support, which translates to higher satisfaction with VA services. | Contracting Audits & Reviews: Automating document analysis and classification in acquisition audit processes. Support Ticketing System: Classifying incoming customer requests by content, intent, systems used, and sentiment, then automatically routing each ticket to the appropriate specialist for faster resolution. Data Analysis & Reporting (with Power BI integration): Analyzing large volumes of contracting and support data to generate insights and feeding those insights into Power BI dashboards and reports. Process Automation: Assisting with internal workflow tasks (e.g., contract reviews and approvals, email triage, form processing) by leveraging AI to reduce manual effort and errors. | Contracting Audits & Reviews: Automating document analysis and classification in acquisition audit processes. Support Ticketing System: Classifying incoming customer requests by content, intent, systems used, and sentiment, then automatically routing each ticket to the appropriate specialist for faster resolution. Data Analysis & Reporting (with Power BI integration): Analyzing large volumes of contracting and support data to generate insights and feeding those insights into Power BI dashboards and reports. Process Automation: Assisting with internal workflow tasks (e.g., contract reviews and approvals, email triage, form processing) by leveraging AI to reduce manual effort and errors. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5428 | Augmented Reality and Virtual Museum | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | AI will enable visitors to submit queries regarding the VA's history and be presented the most suitable information within the VA's curated public domain historical information database. | Utilization of AI will enable the VA to present the most relevant information to the general public with the easiest access, eliminating the need for them to otherwise search the VA Virtual Museum's extensive catalog of publicly available information. It will provide Veterans, employees and the public with accurate information that illustrates the VA's service to veterans. | Outputs include synthesized publicly available information on the VA's history. | Outputs include synthesized publicly available information on the VA's history. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6488 | AI + Digital Process Automation (DPA) Business Process Flow Improvement Solution for Contract Closeout Modernization – Pilot Project | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The AI and Digital Process Automation (DPA) solution pilot for the VA's Strategic Acquisition Center aims to modernize the manual and labor-intensive federal acquisition contract closeout process by automating repetitive tasks, enhancing accuracy, ensuring regulatory compliance, and improving oversight. By integrating AI-driven financial reviews and real-time data analytics, the solution aims to reduce errors, shorten closeout timeframes, and optimize resource allocation, ultimately creating a more efficient, accurate, and compliant process in line with Federal Acquisition Regulation (FAR), VA Acquisition Regulation (VAAR), and VA Financial Policy. | The AI and Digital Process Automation (DPA) solution for the VA’s Strategic Acquisition Center will enhance efficiency by automating repetitive tasks, thereby reducing contract closeout times and minimizing errors. This will ensure regulatory compliance with FAR and VAAR, provide cost savings through optimized resource allocation, and improve oversight and transparency. The streamlined process will reduce administrative backlogs and ensure better use of public funds, ultimately supporting the VA's mission by enabling more efficient service delivery and financial management, benefiting both veterans and taxpayers. | The AI system for the VA's Strategic Acquisition Center will produce automated documentation like Contract's Release of Claims, detailed financial review reports, and drafts of de-obligation modifications. It will ensure compliance with FAR and VAAR regulations through dedicated reports and generate real-time notifications for necessary actions. Additionally, performance dashboards will provide transparent metrics on closeout progress. These outputs will streamline the contract closeout process, enhance accuracy, ensure compliance, and optimize resource use, ultimately improving efficiency and transparency. | The AI system for the VA's Strategic Acquisition Center will produce automated documentation like Contract's Release of Claims, detailed financial review reports, and drafts of de-obligation modifications. It will ensure compliance with FAR and VAAR regulations through dedicated reports and generate real-time notifications for necessary actions. Additionally, performance dashboards will provide transparent metrics on closeout progress. These outputs will streamline the contract closeout process, enhance accuracy, ensure compliance, and optimize resource use, ultimately improving efficiency and transparency. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5684 | Federal EHR End User Peer Support Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Demand for Federal EHR end user support peaks in the first three months after roll-out. The current model of super user support is not sustainable when EHRM-IO restarts deployment with an accelerated schedule (i.e., 13 sites in 2026). This tool will alleviate the burden on super users by answering basic, frequently asked questions with a Retrieval-Augmented Generation (RAG) model built on an LLM, allowing the super users to address the more complex questions. | This use case will lead to increased efficiency, staff satisfaction, and enhanced patient outcomes, and will help accomplish the VA Secretary's strategic goal of Federal EHR deployment. | Responses to user questions related to the use of the Federal EHR. | Responses to user questions related to the use of the Federal EHR. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6903 | Policy Knowledge Base | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | VA cybersecurity staff face the challenge of managing, and staying current with, an extensive array of documentation related to policies, processes, directives, and frameworks, which are essential for maintaining compliance and ensuring the security and privacy of VA information systems. The VA RMF program continues to rely on human decision-making for risk-based decisions. This human-centric approach requires extensive involvement from various roles and the generation and consumption of large amounts of artifacts, demanding substantial staffing resources. | The AI model can help analysts quickly locate and reference the relevant VA policies to validate compliance and identify non-compliance. | Compliance checklist results with policy references and recommendations. | Compliance checklist results with policy references and recommendations. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-10 | E2 HelpBot | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | EEO users currently browse through documents like the User Guides or knowledge articles to understand E2 functionality, and many Help Desk tickets are entered for support on E2 functions as well. We plan to develop an internal HelpBot using an LLM (GPT 4.0) that users can ask questions, reducing user frustration and the time needed to figure out how to perform a task. This use case will provide a knowledge base to assist users in getting information on EEO processes. | Increased efficiency. | Chatbot. The Help Bot AI use case will be expanded by developing two distinct products. The first product is a Portal Help Bot, intended for deployment on the EEO portal. This external bot will assist users with inquiry intake and provide responses to questions related to the EEO portal. The second product is an Internal Help Bot, designed to support Office of Resolution Management (ORM) personnel in accessing information. | Chatbot. The Help Bot AI use case will be expanded by developing two distinct products. The first product is a Portal Help Bot, intended for deployment on the EEO portal. This external bot will assist users with inquiry intake and provide responses to questions related to the EEO portal. The second product is an Internal Help Bot, designed to support Office of Resolution Management (ORM) personnel in accessing information. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4726 | Compliance Made Easy | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | AI is being added to a new version of Compliance Made Easy (CME) in development to assist with grammar and tone checks along with tracking Style Guide updates. This is intended to improve compliance with VA Style Guide and reduce staffing time for correspondence. | Increased efficiency in developing and processing correspondence. | Once fully developed, the AI version of CME will provide style guide, grammar, and tone checks returned to the user on the text the user submitted to CME. | Once fully developed, the AI version of CME will provide style guide, grammar, and tone checks returned to the user on the text the user submitted to CME. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2553 | ESD-Predictive Intelligence | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Predictive Intelligence, part of the ServiceNow platform, uses artificial intelligence and machine learning to improve the work experience. You can create and train models on the platform and integrate with other ServiceNow products and applications. Specific use cases include faster identification of Major Incidents and more accurate routing of tickets. This use case enables earlier detection of major incidents, reducing Mean Time to Restoral (MTTR). | Earlier detection of major incidents. This capability can help reduce Mean Time to Restoral (MTTR) and improve IT operational capabilities. | Dashboard highlighting major incidents as they develop where OIT professionals from Major Incident Management and other OIT stakeholders can review the data for more immediate awareness and informed decision-making. | Dashboard highlighting major incidents as they develop where OIT professionals from Major Incident Management and other OIT stakeholders can review the data for more immediate awareness and informed decision-making. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-257 | Enterprise Precision Scanning and Indexing (EPSI) NEXT | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The integration of AI into EPSI aims to solve several key problems: reducing the time-consuming and error-prone manual processing and summarization of PDF medical records, and addressing inefficiencies in the current workflow for transferring and integrating documents into both VistA and the VA EHR modernization systems. By automating these processes, including the summarization of documents, AI minimizes human errors and accelerates document handling, leading to faster and more accurate updates to the VA electronic health records. Additionally, AI enhances the scalability and future-proofing of the system. It enables EPSI to efficiently manage increasing volumes of medical records without a proportional increase in resource usage and positions the system to adapt to future technological advancements. This ensures that the VA can maintain high standards of efficiency and accuracy in medical record handling as it transitions to using the modernized VA EHR system. | The integration of AI into EPSI offers numerous benefits for the VA and the general public. Enhanced patient outcomes are achieved through improved accuracy and faster service delivery, as AI minimizes human errors and accelerates the integration of medical records into the VA EHR systems. Cost savings are realized through optimized resource use and more efficient processes, allowing the agency to redirect efforts to other critical areas. Efficiency is increased with streamlined workflows and scalable operations, enabling the system to handle more records seamlessly. Improved data management ensures consistent data quality and better accessibility for healthcare providers, leading to more reliable patient records. Lastly, AI integration future-proofs the system by keeping it adaptable to technological advancements, maintaining efficiency and relevance over time. These benefits collectively support the VA's mission of providing high-quality healthcare to veterans while enhancing operational efficiency. | The AI system integrated into EPSI produces several critical outputs that enhance the processing and management of medical records: Automated Document Handling: The AI system efficiently receives and processes PDF medical records from community care providers, ensuring that each document is correctly identified and categorized. Document Summarization: AI generates concise summaries of medical records, extracting key information and clinical details, making it easier for healthcare providers to review and interpret essential data quickly. Accurate Indexing: The AI system indexes the medical records against the corresponding patient profiles, ensuring that all documents are accurately matched and stored within the VA EHR systems, including both VistA and the modernized VA EHR. Quality Assurance Reports: The AI system produces reports highlighting any inconsistencies or anomalies detected during document processing, allowing for prompt resolution and ensuring high data quality. Scalability Metrics: The AI provides insights and metrics on the system's ability to handle increasing volumes of medical records, ensuring that resource allocation and workflow management can be adjusted as needed to maintain efficiency. These outputs collectively contribute to improving the accuracy, efficiency, and accessibility of patient records, ultimately supporting better clinical decision-making and patient care. | The AI system integrated into EPSI produces several critical outputs that enhance the processing and management of medical records: Automated Document Handling: The AI system efficiently receives and processes PDF medical records from community care providers, ensuring that each document is correctly identified and categorized. Document Summarization: AI generates concise summaries of medical records, extracting key information and clinical details, making it easier for healthcare providers to review and interpret essential data quickly. Accurate Indexing: The AI system indexes the medical records against the corresponding patient profiles, ensuring that all documents are accurately matched and stored within the VA EHR systems, including both VistA and the modernized VA EHR. Quality Assurance Reports: The AI system produces reports highlighting any inconsistencies or anomalies detected during document processing, allowing for prompt resolution and ensuring high data quality. Scalability Metrics: The AI provides insights and metrics on the system's ability to handle increasing volumes of medical records, ensuring that resource allocation and workflow management can be adjusted as needed to maintain efficiency. These outputs collectively contribute to improving the accuracy, efficiency, and accessibility of patient records, ultimately supporting better clinical decision-making and patient care. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4521 | Automated Incident Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The Enterprise Command Operations (ECO) Major Incident Management (MIM) team will develop an AI Agent to assist in Major Incident bridge calls. The agent will gather insights from the Major Incident bridge transcription, ServiceNow Incidents, and Observability telemetry to assist in isolating the cause of a Major Incident. The agent would reduce the Mean Time to Repair (MTTR) Major Incidents impacting Veteran care and/or benefits. | This agent would result in improved availability of Veteran-impacting services providing patient care and benefits. It would also significantly improve efficiency as Major Incident bridge calls tie up many system administrators in bridge calls which can be redirected to areas of greater impact. | Documented insights identifying the probable cause of a Major Incident. | Documented insights identifying the probable cause of a Major Incident. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1102 | OpenAI Embedding Generation for Future Vector Search of Banking Data | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The current design of a long-term document archive for a banking partner consists of cloud blob storage objects that are registered in a database with metadata. A future phase will require full-text document search. We can OCR this information now, and implementing vector embeddings would dramatically help individuals find specific documents. PNC Bank will save documents for 7 years and wants to be able to find specific documents. | Increased efficiency of document retrieval in the long-term document archive. | An API will be exposed to a secure VA-internal website where appropriate personnel will use the UI, which will make an API call that returns a list of semantic search matches. | An API will be exposed to a secure VA-internal website where appropriate personnel will use the UI, which will make an API call that returns a list of semantic search matches. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1184 | Ask VA Inquiry Automated Category Classification System | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Primary Problem - Manual Categorization Burden: Veterans and their families currently manually navigate complex category, topic/subtopic selection fields when submitting inquiries through AskVA. This creates barriers to accessing VA services and can result in incorrect categorization that delays or misdirects their requests. Scale and Efficiency Problem: The VA processes over 106M inquiry records, with 35.1M completed inquiries requiring manual review and routing. Manual categorization at this volume creates significant processing delays and resource strain on both veterans submitting requests and VA staff managing the workflow. Consistency and Accuracy Problem: Manual categorization leads to inconsistent classification across inquiries, with approximately 11.3M potential misclassifications identified where topics appear under multiple categories. This inconsistency results in inquiries being routed to incorrect service teams, causing delays in resolution and poor customer experience. Resource Allocation Problem: Incorrect or inconsistent categorization prevents optimal allocation of VA staff and resources across the 18 standardized service areas (disability compensation, education benefits, healthcare, housing assistance, etc.), leading to inefficient service delivery. The AI solution mitigates these problems by providing automated, consistent, and accurate categorization that improves both veteran experience and VA operational efficiency. | For Veterans and the General Public: - Improved access to VA services by eliminating complex manual category selection barriers - Enhanced customer experience with streamlined AskVA submission process - Reduced frustration and time burden when seeking VA assistance For VA Agency Mission: - Increased operational efficiency in processing 106+ million inquiry records - Improved resource allocation with accurate categorization enabling better workload distribution - Reduced processing delays and backlogs through automated classification at 83.38% accuracy | The AI system produces a ranked list of the top 3 category predictions with their corresponding confidence scores for each processed AskVA inquiry. Rather than forcing a single classification decision, the system presents the three most likely categories from the 18 standardized VA service areas (including education benefits and work study, disability compensation, healthcare, debt for benefit overpayments and healthcare copay bills, decision reviews and appeals, sign in and technical issues, Veteran Readiness and Employment, survivor benefits, housing assistance and home loans, Veteran ID Card (VIC), burials and memorials, life insurance, Defense Enrollment Eligibility Reporting System (DEERS), pension, benefits issues outside the U.S., Center for Women Veterans, guardianship/custodianship/fiduciary issues, and Center for Minority Veterans) ranked by the model's confidence level. This approach provides flexibility for users who can select from the top 3 AI-suggested categories or, if none of the predictions match their assessment, access the complete list of all 18 categories for manual selection. The system delivers these outputs in real-time through a RESTful API integration, providing JSON-formatted responses with the ranked predictions and confidence scores. | The AI system produces a ranked list of the top 3 category predictions with their corresponding confidence scores for each processed AskVA inquiry. Rather than forcing a single classification decision, the system presents the three most likely categories from the 18 standardized VA service areas (including education benefits and work study, disability compensation, healthcare, debt for benefit overpayments and healthcare copay bills, decision reviews and appeals, sign in and technical issues, Veteran Readiness and Employment, survivor benefits, housing assistance and home loans, Veteran ID Card (VIC), burials and memorials, life insurance, Defense Enrollment Eligibility Reporting System (DEERS), pension, benefits issues outside the U.S., Center for Women Veterans, guardianship/custodianship/fiduciary issues, and Center for Minority Veterans) ranked by the model's confidence level. This approach provides flexibility for users who can select from the top 3 AI-suggested categories or, if none of the predictions match their assessment, access the complete list of all 18 categories for manual selection. The system delivers these outputs in real-time through a RESTful API integration, providing JSON-formatted responses with the ranked predictions and confidence scores. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1225 | Report Anomalous VIEWS CCM User Behavior | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The AI protects VA sensitive information from insider threat scenarios. Sensitive information includes the unauthorized disclosure of information such as financial records, Veteran information, and employee records stored in the Salesforce application titled "VA Integrated Enterprise Workflow Solution (VIEWS)." AI scans a CRM Analytics database consisting of Salesforce-generated log files to locate patterns of unusual end-user behavior (such as excessive after-hours logins; large keyword searches for terms such as whistleblower, investigation, DD214, and discharge; and irregular database download attempts) and to automatically report such incidents to an email distribution group for action. | The main benefit of this AI project is to provide improved, early incident detection for protecting VA employee and Veteran sensitive information, including PHI, PII, and Sensitive But Unclassified information stored in VIEWS. | The AI system will provide automated alerts via email about specified security incidents for a distribution list of persons who have governance oversight of VIEWS database information and end-user operations within the Office of the Executive Secretariat. | The AI system will provide automated alerts via email about specified security incidents for a distribution list of persons who have governance oversight of VIEWS database information and end-user operations within the Office of the Executive Secretariat. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-128 | Power Platform Solutioning Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The intention of the tool is to speed up the time between a user identifying that help is needed for a solution and resolving what would be required to implement that solution as a citizen developer on the Microsoft Power Platform. | The tool will provide a convenient way for citizen developers on the Microsoft Power Platform to ask questions about the kind of application or solution they are trying to build and have the chatbot respond with the technologies that can be employed to complete the solution, as well as any licensing details the user needs to be aware of. The tool would also help identify any potential issues with the solution that may need to be handled along the way, such as licensing needs. | High-level description of the technologies that would be required to implement the proposed solution, as well as any licensing needs. | High-level description of the technologies that would be required to implement the proposed solution, as well as any licensing needs. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1347 | Tachyon AI | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Tachyon "1E Intelligence" is expected to provide deeper and more meaningful insights into the digital experience of VA employees and customers, including better visibility into the performance of endpoint devices. It is also expected to enhance the Software Asset Management features and increase the effectiveness of User Sentiment gathering. The “1E Intelligence” feature leverages various types of AI (predictive, causal, generative, cloud, edge, “Digital Twin”) to enhance insights into the digital employee experience, device performance, and software reclamation. | Helps transform complex data into clear and actionable insights, ensuring digital workplace leaders can make informed decisions that boost uptime, increase operational efficiency, and ultimately result in improved patient care. | Reports that deliver richer visibility into the Digital Employee Experience throughout the VA endpoint fleet | Reports that deliver richer visibility into the Digital Employee Experience throughout the VA endpoint fleet | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3111 | Redseal AI Usage | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The cyber risk analytics AI is designed to provide enhanced risk prioritization and likelihood of exposure, validating impact through AI orchestrated workflows and automation. | Increased efficiency by reducing incident response guesswork and accelerating remediation. | Outputs consist of a dynamic network map model of our hybrid environment, attack path visualizations showing possible lateral movement routes an attacker could take, segmentation validation reporting to confirm zero trust and Network Access Control (NAC) policies are enforced. In addition, there are outputs related to risk and vulnerabilities, compliance, and workflow automation. | Outputs consist of a dynamic network map model of our hybrid environment, attack path visualizations showing possible lateral movement routes an attacker could take, segmentation validation reporting to confirm zero trust and Network Access Control (NAC) policies are enforced. In addition, there are outputs related to risk and vulnerabilities, compliance, and workflow automation. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3776 | Ingestion of TRM and VASI | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The Technical Reference Model (TRM) lacks a connection between the technologies it catalogs and the instances of those technologies in use at VA. | Ease identification of the use of specific technology at the VA and reduce time spent identifying how and where technology is being used. | Comprehensive considerations for solutions already available at VA and how the technology is being used for a solution. | Comprehensive considerations for solutions already available at VA and how the technology is being used for a solution. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4253 | Ansible Lightspeed | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Lack of automation adoption, or of the coding expertise needed to automate a remediation or code change. Ansible Lightspeed helps create automations faster, making teams more efficient and productive in a shorter amount of time than if they needed to write automations on their own. | Quicker time to service and resolution | Automation playbooks. Ansible Lightspeed answers questions inside the Ansible Automation Platform application via a chatbot dialogue box in the application. Ansible Lightspeed can also be a plugin for Visual Studio Code, where it can generate YAML code for a developer (or administrator or engineer) as they are developing the YAML code. Both are optional add-ons included with Ansible Automation Platform. | Automation playbooks. Ansible Lightspeed answers questions inside the Ansible Automation Platform application via a chatbot dialogue box in the application. Ansible Lightspeed can also be a plugin for Visual Studio Code, where it can generate YAML code for a developer (or administrator or engineer) as they are developing the YAML code. Both are optional add-ons included with Ansible Automation Platform. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5472 | VetsEZ Middleware Development | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The HDM middleware environment supports large-scale, secure health data transport across VA systems, but much of its underlying codebase is legacy (e.g., MUMPS, Java, .NET Framework) and complex, with inconsistent documentation and a heavy reliance on senior SMEs. This leads to: Ongoing technical debt as well-functioning but aging middleware components require careful, phased modernization; Longer onboarding times when new developers need to interpret complex, high-value legacy code without comprehensive documentation; Resource dependencies where mid-tier developers require more direct SME involvement for certain modernization and refactoring tasks; Variation in code patterns that can affect consistency in security, performance, and maintainability across platforms; Slower adoption of modern architectures such as containerized microservices and FHIR integrations. How AI Solves It: Explain legacy code in plain language, creating complete documentation for knowledge preservation and onboarding; Refactor code for maintainability, performance, and security, reducing technical debt; Generate standardized, efficient middleware code aligned with VA modernization goals; Produce AI-augmented unit tests to detect defects earlier (“shift-left” testing) and increase coverage; All outputs will still undergo human review, VA security scanning, and governance to ensure compliance with Federal Information Security Management Act (FISMA) High standards and VA TRM policy. 
| For the VA’s Mission: Faster code modernization – AI-assisted refactoring and generation will help transition middleware components from older platforms (MUMPS, Java, .NET Framework) to secure, containerized microservices. Improved developer efficiency – By automating repetitive coding & documentation tasks, developers can focus more on high-value modernization work and less on manual rework. Knowledge preservation – AI-generated code explanations and documentation reduce the risk of key-person dependency and improve continuity across teams. Higher code quality – AI-augmented unit tests and standardized patterns will strengthen maintainability and alignment with FISMA High requirements. Accelerated delivery cycles – With AI enabling mid-tier developers to work more independently, releases can move through the DevSecOps pipeline faster while maintaining quality and compliance. For the General Public / Veterans: Improved reliability of VA systems – Modernized, better-tested middleware helps ensure health and benefits data flows securely and consistently. Faster implementation of new services – Reduced development cycle times allow Veterans to benefit sooner from new integrations, such as Fast Healthcare Interoperability Resources (FHIR)-based interoperability with community care providers. Sustained continuity of operations – Documentation and knowledge capture ensure that mission-critical systems can be maintained and improved even as staff transitions occur. | AI System Outputs The system will produce the following outputs to support HDM middleware modernization: Code Explanations & Documentation Plain-language summaries of complex legacy code (e.g., MUMPS, Java, .NET Framework) to aid in developer understanding and onboarding. Structured documentation suitable for inclusion in VA knowledge repositories. 
Refactored Code Updated, cleaner, and more maintainable versions of existing middleware code that preserve original functionality while improving performance, security, and alignment with modern coding standards. New Code Segments Efficient, standardized code snippets or modules to support integration tasks, modernization efforts, and migration to containerized microservices. AI-Generated Unit Test Scaffolds Automated creation of baseline unit test templates that improve early defect detection, expand test coverage, and support “shift-left” testing practices. Code Review Suggestions Recommendations for improving existing or newly generated code, focusing on maintainability, security, and architectural alignment. | AI System Outputs The system will produce the following outputs to support HDM middleware modernization: Code Explanations & Documentation Plain-language summaries of complex legacy code (e.g., MUMPS, Java, .NET Framework) to aid in developer understanding and onboarding. Structured documentation suitable for inclusion in VA knowledge repositories. Refactored Code Updated, cleaner, and more maintainable versions of existing middleware code that preserve original functionality while improving performance, security, and alignment with modern coding standards. New Code Segments Efficient, standardized code snippets or modules to support integration tasks, modernization efforts, and migration to containerized microservices. AI-Generated Unit Test Scaffolds Automated creation of baseline unit test templates that improve early defect detection, expand test coverage, and support “shift-left” testing practices. Code Review Suggestions Recommendations for improving existing or newly generated code, focusing on maintainability, security, and architectural alignment. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-552 | EDU Synthetic Test Data POC | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | To generate mock data that allows VBA Education Service to leverage non-production, non-PII environments for more robust testing. | Faster, automated testing is a key outcome of being able to generate life-like yet identity-obscured data. | Fake Veterans with variables and constraints that mimic real life and can be used for UAT and automated testing. | Fake Veterans with variables and constraints that mimic real life and can be used for UAT and automated testing. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5575 | Pyramid Analytics Generative AI Options | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Pyramid Analytics is a scalable business intelligence application/platform that uses VA data sources for building and maintaining data reporting content. The application/platform has several tools, including dashboard and report creation tools, and has 57,000+ registered users at VA. | Increased efficiency by helping users find information faster | AI collects and organizes information, then a person acts on it. The AI tools within Pyramid Analytics produce outputs such as interactive chatbot responses, text and audio prompts, generated scripts, images, infographics, dynamic slide insights, and managed Python and R scripting environments, all of which enhance data reporting and content creation for VA users. | AI collects and organizes information, then a person acts on it. The AI tools within Pyramid Analytics produce outputs such as interactive chatbot responses, text and audio prompts, generated scripts, images, infographics, dynamic slide insights, and managed Python and R scripting environments, all of which enhance data reporting and content creation for VA users. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5657 | MetricSage | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | We intend to build a machine-learning model to optimize cost for the Claim Evidence API within VBMS. The input data will be system metrics collected by Dynatrace, such as CPU and memory usage, and the output will be projected usage and optimal sizing of resources based on that usage. VA Project Managers and Benefits Integrated Platform engineers will use this system to determine the resources to provision and the expected cost of those resources. This use case will be used to anticipate future compute cost based on claim workload. | Proper forecasting will allow us to right-size our compute resources. | A dashboard that shows present and anticipated future compute utilization. | A dashboard that shows present and anticipated future compute utilization. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5958 | Integration of Provar Manager with VA GPT | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Streamline the creation of test cases. This approach saves time, reduces manual effort, and ensures that testing is both efficient and aligned with the project's goals. | Seamless testing of Customer Relationship Management (CRM) applications developed in Salesforce that support Department of Veterans Affairs services. Reducing time to deliver and ensuring an accurate match between requirements and deliverables. | Successful integration of VA GPT and automated generation of test cases. | Successful integration of VA GPT and automated generation of test cases. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-605 | BillieGPT POC | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | It will exist to assist school employees (school certifying officials) with general use, policy, and procedures for handling Veterans' cases from the school's perspective. It gives information directly to school officials rather than having them call in to VA call centers for assistance. | Better outcomes for student veterans, increased efficiency | Generative AI response to the user question | Generative AI response to the user question | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6170 | ECC Automated Knowledge Management | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Currently, teams manually review and update Knowledge Articles, often without clear prioritization. This leads to rework, inefficiencies, and missed opportunities to align knowledge with real-time service changes. An AI-driven approach will streamline this process and ensure knowledge is kept current and relevant. | This initiative is expected to improve efficiency by 20% in the Minimum Viable Product (MVP), and grow as the agent creates and modifies knowledge. | The MVP is planned in 2 phases with different outputs. The 1st phase of the MVP is to produce a prioritized list of knowledge articles for the Integration and Sustainment team to modify based on inputs from multiple data sources. The 2nd phase of the MVP will output a complete knowledge article that will be reviewed by a person for accuracy and completeness. | The MVP is planned in 2 phases with different outputs. The 1st phase of the MVP is to produce a prioritized list of knowledge articles for the Integration and Sustainment team to modify based on inputs from multiple data sources. The 2nd phase of the MVP will output a complete knowledge article that will be reviewed by a person for accuracy and completeness. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6585 | Custom GPT for Network Operations and Automation (Wide Area Network) | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | This solution will be able to: 1. Information Retrieval: Query our router database and backup configurations to answer questions about network topology, device status, and configuration details 2. Configuration Generation: Assist in building network device configurations based on our standards and requirements 3. Troubleshooting Support: Help diagnose network outages by analyzing device data and providing guided troubleshooting steps 4. Read-Only Device Access: Eventually integrate with our network infrastructure to gather real-time information from devices in a secure, read-only manner It will reduce the time required to resolve network issues, cut human error in configuration, and improve the availability of network support. | Benefits: • Reduced mean time to resolution for network issues • Standardized configuration generation reducing human error • Enhanced knowledge sharing across the WAN team • 24/7 availability for network operations support • Improved training resource for new team members | Router configurations, troubleshooting information, information pertaining to our devices, security compliance checks, answers to leadership's questions, and training information for new employees. | Router configurations, troubleshooting information, information pertaining to our devices, security compliance checks, answers to leadership's questions, and training information for new employees. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1360 | Potential Fraud or Waste | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Quickly develop predictive analytics based on historical values captured and assessed within the Data Analytics Service (DAS) Dashboard. The dashboards allow various purchase card managers to monitor purchase card spending of employees. The dashboards help monitor for fraud, waste, and abuse, and assist in providing oversight for compliance with purchase card laws and policies. We have now incorporated advanced data analytics within the dashboard. The advanced data analytics model generates an expected received date for items. It then flags items that have gone beyond the expected date: items received late (possible waste) and/or items not yet received (possible fraud). This has the potential to save the analyst time by investigating cases that have been flagged, as opposed to potentially investigating an entire dataset, which is what they have been doing up until now. The model analyzes all purchase card transactions including ordered, delivery, and received dates as its inputs. It outputs expected received dates. The intended users of the product are purchase card managers. | - Delivers high value to Customers, Partners, and Stakeholders by enhancing user efficiency and analytical capacity. - Customers receive quick responses to data analysis questions, - Customers can communicate requests to the Chatbot using natural language format, - Provides quick and accurate answers to inquiries - Provides 24/7 support - Reduces support team emails - Provides support for ad hoc data analytics requests - Improves scalability to support customers. 
| Predictive analytics based on historical values captured and assessed within the Data Analytics Service (DAS) Dashboard. | Predictive analytics based on historical values captured and assessed within the Data Analytics Service (DAS) Dashboard. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5493 | iFAMS CSR Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The pilot deployment will simplify retrieval of knowledge content to resolve user support requests and enable the customer support representative (CSR) to confirm accuracy of steps presented to improve knowledge content. It will be internal facing in pilot to assess usability, efficiency, and correctness with CSR representatives. | Improved customer satisfaction, cost efficiency, consistency of communication, reduced human error. | Text responses, interactive elements, information retrieval, link sharing, feedback collection. | Text responses, interactive elements, information retrieval, link sharing, feedback collection. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1348 | Smart Claim Check | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Smart Claim Check (SCC) reduces avoidable claims processing errors by addressing the problem of manually reviewing extensive documentation by Veterans Service Representatives (VSRs) to identify and resolve issues. Avoidable claims processing errors lead to extended wait times for Veterans to receive the benefits to which they are entitled. SCC leverages natural language processing (NLP) and machine learning (ML) to analyze past deferral data and identify common patterns and themes. | By leveraging the technologies described in #12, Smart Claim Check can automatically review claim documentation, identify common patterns, and offer structured recommendations, including context-specific guidance from the M21-1 manual, confidence scores, and links to relevant sections, thereby expediting the claims review process and reducing avoidable re-work and extended processing times. | The output of the AI system is timely guidance with relevant M21-1 linkage that will appear in the User Interface (UI) automatically for the claims processors to review and act upon. SCC refers to these as "Smart Claim Insights". | The output of the AI system is timely guidance with relevant M21-1 linkage that will appear in the User Interface (UI) automatically for the claims processors to review and act upon. SCC refers to these as "Smart Claim Insights". | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1512 | Loan Guaranty Lender’s Handbook Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Reinforcement Learning: AI trained through trial and error using rewards and penalties to optimize decision-making policies. | Loan Guaranty Service (LGY) has many policy and procedural documents from which it is difficult to retrieve information. This AI tool is designed to make the knowledge retrieval process more efficient and improve consistency. | Increase efficiency and consistency in knowledge retrieval from source documents (manuals, SOPs, other policy & procedural documents). | Curated information (answers) to user questions based on LGY policy and procedures. | Curated information (answers) to user questions based on LGY policy and procedures. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2036 | Education Call Center (ECC) Next Gen POC | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | To give easier access to information like approved, current procedures or helpful tips to call center employees who engage directly with Veterans. This is a high-turnover position within VA, so this would lighten the training barrier and supply our Veterans with better information. | Increased efficiency and improved communication outcomes with Veterans about their benefits. Perhaps improve turnover at the call centers. | • Function of the model: Generative AI model that can provide real-time transcription of calls, capture questions asked by the caller, and provide recommended answers with source citations to ECC representatives • Output: Call transcript, answers to questions, queryable database for quality-control purposes | • Function of the model: Generative AI model that can provide real-time transcription of calls, capture questions asked by the caller, and provide recommended answers with source citations to ECC representatives • Output: Call transcript, answers to questions, queryable database for quality-control purposes | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2089 | AI Personal Assistance POC | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Help VBA Education Data Analysts create queries and interpret data which can be used for decision making. | Increased efficiency | Answers to questions. The tool would generate a mock SQL query and other useful outputs that would help refine the data inquiry. | Answers to questions. The tool would generate a mock SQL query and other useful outputs that would help refine the data inquiry. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2373 | Generative Annotations | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Veteran Service Representatives (VSRs) are required to annotate documents relevant to a Veteran claim so that subsequent reviewers are able to quickly reference information to make claim decisions. Currently, in order to annotate documents, VSRs have to read large bodies of text and manually summarize them, which is time-consuming and increases cognitive load. | Increased efficiency | AI-summarized text | AI-summarized text | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3670 | ASKA: Automated Support and Knowledge Assistance | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | ASKA is an internal support tool for VBA Public Contact staff. It connects approved policy and procedural content from SharePoint to a searchable question-and-answer bot. Staff can ask common questions about public contact, FOIA, congressional inquiries, and other customer service duties. ASKA does not use or process PII or claims data, nor does it make any automated decisions; it simply returns standard reference answers and directs staff to relevant SOPs or job aids. We currently use many different websites to obtain information, which delays services to the public, Veterans, and Service members. Having a tool available to provide this information will help provide this service at a faster rate. | Increase efficiency and productivity for public services | We are currently looking into building this tool or utilizing what is already available. We are looking at a space where all information is extracted and easy to find | We are currently looking into building this tool or utilizing what is already available. We are looking at a space where all information is extracted and easy to find | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5852 | VBA Mail Management Services (MMS) - Modification | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | We are shifting data capture from back-office processing to the point of first contact. The system interactively validates VA forms in front of the user as they are being submitted, allowing the submitter to verify and correct errors immediately (reducing churn). This will help with claims modernization. | To prevent claims-processing inefficiencies by improving data quality at intake. | Upgrade to QuickSubmit to include a forms validation engine. | Upgrade to QuickSubmit to include a forms validation engine. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1975 | GenAI for KM | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | To provide agents/users with content-insight capabilities beyond the simple keyword search currently available, improve response time and quality, and incorporate updated source content into future answers in near real time. | Improve response time and the quality of information provided to customers | Generated content would vary in nature depending on the specific processes/cases it is deployed for, including synthesis, summarization, and type/topic identification, among others. | Generated content would vary in nature depending on the specific processes/cases it is deployed for, including synthesis, summarization, and type/topic identification, among others. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1647 | PINGOO.AI | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Enhanced patient health literacy through education and patient engagement. | A more educated patient population and improved health literacy as well as reduced physician burden. | Direct patient education through a RAG model. | Direct patient education through a RAG model. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1729 | Digizens | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Patient compliance with treatment plans. Solves the problem of costly clinical trials modeling behavior. Digizens is an AI that creates a digital cohort of individuals who can then take part in clinical trials related to compliance with certain tasks. Various messaging can be relayed to the digital individuals in order to measure effectiveness of messaging in real-world scenarios without the risk of harm to real individuals. Population-based behavior modeling based on communication, messaging, and compliance. These models can be run prior to engaging actual participants in various studies to optimize adherence and outcomes. | Improved patient compliance with more effective messaging. Low-risk, cost-effective clinical trials related to patient behavior. | Output is modeled behavior and compliance rates based on messaging. | Output is modeled behavior and compliance rates based on messaging. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2549 | Sentiment analysis for app feedback | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Analyzing and summarizing feedback provided by patients and providers for VA Mobile Apps. Identifies trends and makes inferences from "App Feedback" data to provide a scalable solution. | Reducing working hours and providing VA app developers with user feedback analysis and trends. | Major topic classifications of given output. | Major topic classifications of given output. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2910 | BlueTeam AI | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Ensure trustworthy and governed GenAI and AI modeling through collaboration with the VA Innovation Unit on monitoring the network for LLM traffic and other AI-indicated data and routing the data through a rules engine. | BlueTeam AI enables network monitoring capabilities to provide governance and guardrails for LLM Generative AI use for the enterprise. | The system outputs only approved data and data categories to the LLM and returns a notification to the user for any violations. | The system outputs only approved data and data categories to the LLM and returns a notification to the user for any violations. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3775 | BessISSTANT | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Decrease time needed for technicians to find information in service manuals. | Increased efficiency. | Excerpts from service manuals. | Excerpts from service manuals. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3783 | Clinician-Administered PTSD Scale for DSM-5 (CAPS-5) Clinician Training Simulator | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The level of effort required to train clinicians, researchers, and examiners to administer and score the Clinician-Administered PTSD Scale for DSM-5. The CAPS-5 Clinician Training Simulator comprises three online courses, available in TMS and TRAIN, that provide instruction in the administration and scoring of the CAPS-5. The courses use voice recognition, 3-D technology, and artificial intelligence to create an immersive learning experience. The courses are intended for clinicians, trainees, researchers, and examiners in VA and in the community. They use AI in the following ways: 1. In the three current CAPS-5 virtual patient courses, learners have the option to speak the prompts, with their spoken prompts transformed to text using speech recognition. 2. The learners’ prompts, whether spoken or typed, are programmatically compared to the required prompts in the CAPS-5. The system analyzes whether the prompts were delivered exactly (“verbatim”), with slight variations (“paraphrased”), or with unacceptable variations (“off-script”). 3. One of the courses analyzes learner prompts in order to trigger comments (corrections or praise) from a virtual coach. 4. We’re currently revising one of the courses to take better advantage of semantic similarity to analyze learners' inputs. 5. In the future, we may deploy generative AI. Here’s how the vendor described it in their response to our solicitation: “This new design will also leverage a separate backend to handle AI multi-agency and retrieval augmented generation (RAG) to ensure large language models (LLMs) used will always operate as designed.” This will be a closed system that does not pull information from or input information to publicly available platforms (e.g., GPT-4, Bard). | Saving time and human resources that can be better deployed in patient care. Improving accuracy of diagnosis of PTSD in Veterans. | Courses in TMS and TRAIN. AI assists learners in understanding how to accurately administer and score the CAPS-5. Courses are helpful in providing clinical care but are not required, nor is it a requirement for clinicians to use the CAPS-5 with their patients. | Courses in TMS and TRAIN. AI assists learners in understanding how to accurately administer and score the CAPS-5. Courses are helpful in providing clinical care but are not required, nor is it a requirement for clinicians to use the CAPS-5 with their patients. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3939 | Policy AId | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Policy lookup is a time-consuming task in the VA. This challenge was identified as an ideal candidate for generative AI product development. | The AI model would be granting easier access to local governance for users. Users are still responsible for decision-making/actions based on their respective roles. | The AI model would be producing text and relevant material relating to the medical center's policy in relation to the user's query. | The AI model would be producing text and relevant material relating to the medical center's policy in relation to the user's query. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4513 | Clinical Pathways for Cancer Patients | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The VA National Oncology Program (NOP) has invested in creating clinical pathways that guide providers to provide high quality evidence-based standard of care treatments to Veterans with cancer. However, it is currently unclear how many or how often providers are ‘on pathway’ or are concordant with this recommendation. The work will focus on leveraging AI to assess the degree of adoption of various pathways. For example, the VA NOP proposes to develop, evaluate, and implement an informatics workflow that is capable of extracting information regarding whether a Veteran with prostate cancer has been offered access to germline and somatic genetic testing. This will assess care quality across the VA in terms of genetic testing in Veterans with prostate cancer and provide the necessary information to engage in quality improvement efforts. The work will follow the framework NOP developed to assess germline genetic testing in Veterans with breast cancer, in response to the Advances in Mammography and Medical Options (MAMMO) for Veterans Act of 2022. Briefly, Veterans receiving care for prostate cancer in the VA will be identified using electronic health information in the VA Corporate Data Warehouse and their latest relevant clinical notes will be retrieved. | Increase efficiency: it will speed up providers' review of patients' treatment plans based on the National Oncology Program guideline, since patients' pathway information will be highlighted. This makes it easier for the providers to make better treatment decisions for the patient, therefore potentially improving patient outcomes. | Whether a patient is on a clinical pathway and where on the clinical pathway they are | Whether a patient is on a clinical pathway and where on the clinical pathway they are | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4640 | Glassbeam Clinsights | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The system collects medical device data and then uses AI to predict medical device failure. It uses wear-and-tear data to determine whether components are failing, in order to prevent patient harm. | It is expected to detect medical device component wear and report it to HTM, giving the tech the knowledge that a part is failing so that it can be replaced before it wears out or breaks. | This system takes empirical data from manufacturers' manuals and then exports that data to a web interface, where it displays the health of the device and its components, as well as any failures it captures. | This system takes empirical data from manufacturers' manuals and then exports that data to a web interface, where it displays the health of the device and its components, as well as any failures it captures. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5054 | Surveillance and Reporting of Suicidal Ideation Assessment in PTSD Specialty Care Clinical Notes using Natural Language Processing (NLP) | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | In light of the known elevated suicide risk among individuals diagnosed and in treatment for mental health conditions, and the associated need for ongoing monitoring of suicidality in this population, we are proposing to surveil and monitor nationally both the conduct and quality of suicide ideation (SI) assessment during treatment in PTSD specialty care. Our methods entail the use of natural language processing (NLP) of progress note documentation of treatment being conducted in the PTSD Clinical Team (PCT) specialty care clinics, to identify SI mentions, to quantify consistency of documented SI assessments by providers, and to classify SI mentions with respect to their clinical quality. We will report SI assessment rates nationally with breakdowns by VISN and facility to support administrative monitoring. This use case will help to identify rates of failure of mental health providers to conduct or document ongoing suicide risk screening throughout the conduct of mental health treatment for PTSD. | Enhance VA system monitoring of suicide risk screening in a population at increased risk for suicide | Surveillance rates of absent treatment documentation for recommended suicide risk screening at the national, VISN, and facility levels | Surveillance rates of absent treatment documentation for recommended suicide risk screening at the national, VISN, and facility levels | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5136 | Using AI to read PDFs and parse the information to SharePoint list | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Deviation forms are submitted via email; the AI will grab the pre-selected PDF information and parse the data into a SharePoint List. Due to the size of the files received, the AI will save hours off the analysts' workload. The problem to be solved is parsing the Party administrators' contractual deviations from the submitted PDF forms. | Increased efficiency | The data will be parsed to a local SharePoint list. | The data will be parsed to a local SharePoint list. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1029 | Pathology and Laboratory Medicine SharePoint Chatbot - Quincy | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The current SharePoint page has a tremendous amount of information. Using the Chatbot helps the field search for information without having to contact the office directly. | The chatbot will reduce the time it takes for individuals to find important information and free up program staff to concentrate on quality and safety issues for the clinical laboratory care of the Veteran. | If AI is approved, it will allow the Chatbot to look outside the PLM SharePoint page for information on such sites as the Centers for Medicare and Medicaid Services site. | If AI is approved, it will allow the Chatbot to look outside the PLM SharePoint page for information on such sites as the Centers for Medicare and Medicaid Services site. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1718 | SIM VOX for training nursing scenarios | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | AI is intended for teaching and clinical training to improve nursing clinical decisions by putting nurses through a simulated scenario that they may encounter in real life. SIM VOX is AI software used for simulation scenarios on nursing competencies with mannequins. | Improving clinical and patient outcomes. | Nursing and clinical staff improvements. | Nursing and clinical staff improvements. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2248 | BESSistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | This use case will cut down the amount of time it takes an individual to search manuals for preventative maintenance and troubleshooting instructions - the AI chatbot returns answers to queries from uploaded maintenance documentation. | Increased Efficiency | The application will use retrieval augmented generation (RAG) to query a set of technical service manuals | The application will use retrieval augmented generation (RAG) to query a set of technical service manuals | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2354 | SCORE (Structured Clinical Output Rating Engine) | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | SCORE (Structured Clinical Output Rating Engine) uses o3-mini LLM to grade the quality of clinical notes that have either been created by a human or AI and provides feedback to the user. When a transcript of the patient encounter is available, SCORE employs embedding-based semantic analysis to enhance patient safety and documentation accuracy. Embeddings convert text into numerical vectors that capture meaning, enabling the system to measure how semantically similar—or different—statements are between the clinical note and the source transcript. This capability powers two critical safety detectors: Hallucination Detection identifies factual-sounding claims in the note that lack support in the transcript. This is essential for AI-generated documentation, where models may produce plausible but ungrounded statements about medications, allergies, diagnoses, or procedures. Claims with weak transcript support are flagged with risk levels (high/medium/low) based on their potential clinical impact. Contradiction Detection finds instances where the note and transcript discuss the same clinical topic but present conflicting information—such as differing medication dosages, mismatched vital signs, or inverted symptom reports (e.g., "denies chest pain" versus "reports chest pain"). | Improve clinical documentation quality through standardized evaluation and scoring and create efficiency for documentation audit staff. | Output: Quality Score & Actionable Guidance. Returns a hybrid quality score (1.0–5.0) and letter grade computed from three weighted components: PDQI-9 Analysis (70%) – AI-driven evaluation of 9 clinical documentation dimensions (up-to-date, accurate, thorough, useful, organized, concise, consistent, complete, actionable) Heuristic Metrics (20%) – Rule-based assessment of length, structure, and redundancy Factuality Verification (10%) – Claim validation against source transcript (when available) Each dimension includes narrative feedback, supporting evidence excerpts from the note, and specific improvement suggestions to guide documentation enhancement. | Output: Quality Score & Actionable Guidance. Returns a hybrid quality score (1.0–5.0) and letter grade computed from three weighted components: PDQI-9 Analysis (70%) – AI-driven evaluation of 9 clinical documentation dimensions (up-to-date, accurate, thorough, useful, organized, concise, consistent, complete, actionable) Heuristic Metrics (20%) – Rule-based assessment of length, structure, and redundancy Factuality Verification (10%) – Claim validation against source transcript (when available) Each dimension includes narrative feedback, supporting evidence excerpts from the note, and specific improvement suggestions to guide documentation enhancement. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2566 | TherapyTrainer | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Simulation and feedback for training in evidence-based psychotherapies. This technology is intended to allow therapists to have just-in-time support and feedback when learning a new treatment. Problem to be solved: Problems scaling training; there are limited available experts to provide consultation and feedback. | more rapid training and skill acquisition; more therapists available to provide high quality evidence-based care. | AI consultant will provide feedback and suggestions for how to engage in therapeutic interactions with a simulated patient agent; AI simulated patient will simulate patient behavior in a therapy session; AI fidelity monitor will provide the AI consultant with assessment of therapist skill; AI safety agent will detect whether therapist response to simulated patient risk is appropriate. | AI consultant will provide feedback and suggestions for how to engage in therapeutic interactions with a simulated patient agent; AI simulated patient will simulate patient behavior in a therapy session; AI fidelity monitor will provide the AI consultant with assessment of therapist skill; AI safety agent will detect whether therapist response to simulated patient risk is appropriate. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2578 | Internal QMS Processes Revamp | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | 1. Planning & Restructuring Assistant: Utilize GPT-4x (or more advanced available models with reasoning capabilities) to analyze current QMS structures, identify optimization opportunities, and generate actionable restructuring plans. 2. Document Classification System: Implement GPT-4x with structured output to automatically classify, tag, and organize QMS documentation based on content, purpose, and relevance. 3. Code Development Support: Assist developers with code generation, optimization. 4. VBECS Knowledge Base: Build a Retrieval Augmented Generation (RAG) system using embeddings to create a searchable repository of VBECS system knowledge. 5. Intelligent RAG Access Agent: Create an agentic interface using GPT-4x that can intelligently navigate the VBECS knowledge base to retrieve precise information and answer complex queries. AI does not directly output SOPs. Instead, it provides information and guidance to assist a human in drafting the SOPs. The final output is created by a person based on the tool’s suggestions and support. | Assists us with meeting regulatory deadlines in February 2026. | Internal Documentations such as SOP (Standard Operating Procedures) | Internal Documentations such as SOP (Standard Operating Procedures) | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3255 | Using LLMs to assist with data extraction | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | We are investigating the ability of Large Language Models (LLMs) to automate the extraction of discrete data elements from clinical notes. The National Radiation Oncology Program (NROP) office is using Quality Measures to evaluate the quality of care to veterans receiving radiotherapy inside of the VA and in the community. While much value was found in the data collected in prior projects, the manual data extraction process was expensive, time-consuming, and still error-prone. The purpose of this work is to determine the feasibility of using modern AI technology to reduce the burden of collecting this information from clinical notes. AI will be used to automate data extraction from free-text notes, which is not possible at scale because of the cost and the level of human labor required. | Extracting this discrete data will help to track changes in treatment quality over time, identify care gaps and provide data which could later be used for building predictive models to improve care and become a resource for VA researchers. | The primary output of this system will be discrete data found in free-text clinical notes. | The primary output of this system will be discrete data found in free-text clinical notes. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3944 | Pre-COVA | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Incorrectly matched or unmatched products are not automatically migrated to the new federal EHR upon go-live, requiring facility staff to manually review and enter prescriptions so they are available for Veterans to request. This would improve the efficiency and accuracy of that matching. | It is intended to improve product matching efficiency and accuracy, improving patient safety and availability of prescriptions post Federal EHR go-live. | Prioritized / directed matching of Legacy VISTA Prescription products to Federal EHR products to facilitate conversion. | Prioritized / directed matching of Legacy VISTA Prescription products to Federal EHR products to facilitate conversion. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4094 | PestScore VA: AI-Driven Pest Risk Forecasting | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | VA facilities currently rely on fragmented, inconsistent pest control documentation methods, ranging from handwritten logs to variable digital records. This makes it difficult to detect early warning signs, track pest trends, or ensure system-wide compliance with IPM standards. Additionally, leadership lacks real-time visibility into pest activity across departments or sites. PestScore VA solves this by creating a standardized, AI-enhanced system that enables predictive insights and uniform pest risk documentation, enhancing transparency, documentation, and proactive care. | 1. Predictive Insight: The AI forecasts pest activity using hybrid logic (trends plus probability), helping teams act before outbreaks escalate. 2. Standardized Reporting: Uniform, clean reports help facilities meet regulatory and environmental standards more easily. 3. Operational Efficiency: Technicians enter familiar data (notes, sightings), and AI handles analysis, saving time without increasing workload. 4. Environmental Health: Supports less toxic, preventive pest strategies that align with patient safety and VA sustainability goals. 5. Leadership Visibility: Allows decision makers to compare zones or hospitals at a glance, helping allocate resources intelligently. | 1. Risk Scores: Each zone receives a dynamic PestScore (0-100) indicating current and future pest pressure based on conditions, sightings, and predictive logic. 2. Intelligent Summaries: AI writes plain language summaries explaining likely pest causes, routes, and risk drivers, tailored to non-expert readers. 3. Follow-Up Recommendations: PestScore suggests reinspection timelines based on severity, location type, and pest species behavior. 4. PDF Reports: Branded, timestamped reports are generated instantly, suitable for leadership review, audits, or historical tracking. | 1. Risk Scores: Each zone receives a dynamic PestScore (0-100) indicating current and future pest pressure based on conditions, sightings, and predictive logic. 2. Intelligent Summaries: AI writes plain language summaries explaining likely pest causes, routes, and risk drivers, tailored to non-expert readers. 3. Follow-Up Recommendations: PestScore suggests reinspection timelines based on severity, location type, and pest species behavior. 4. PDF Reports: Branded, timestamped reports are generated instantly, suitable for leadership review, audits, or historical tracking. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4474 | Image Quality Control Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Currently, patients submit images to My VA Images, then a provider reviews the images. If a patient submits a low-quality image, they must await a response from the provider, who must then request that the patient submit another image. This causes unnecessary delays in patients receiving care for their dermatological conditions. If the image is low quality (blurry, too light, too dark, too far, too close, not centered, or distracting background), this model will prompt patients to submit another image immediately following their submission. | Increased efficiency due to immediate review and qualification of images submitted to My VA Images, simplifying the process for both patients and providers. Improved patient relations and reduced patient frustration with the overall process. | Patients will receive a notification in My VA Images if the image they submitted is low quality (blurry, too light, too dark, too far, too close, not centered, or distracting background). | Patients will receive a notification in My VA Images if the image they submitted is low quality (blurry, too light, too dark, too far, too close, not centered, or distracting background). | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4739 | AI for Monitoring Veterans' Diagnoses | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Providing timely care to Veterans. | This AI tool will ensure timely care for Veterans, especially when there is a new diagnosis of a chronic condition that requires much-improved coordination of care between facilities and community care. This will reduce readmissions and hospitalizations and enhance quality of care and life expectancy for Veterans. This will save the VA millions of dollars spent on ER care, acute care, readmissions, and hospitalizations. | The AI tool will update a dashboard for monitoring diagnoses and timely relay of results back to providers. This AI application will be a monitoring tool for providers and leadership to track timely care for Veterans in their facility. | The AI tool will update a dashboard for monitoring diagnoses and timely relay of results back to providers. This AI application will be a monitoring tool for providers and leadership to track timely care for Veterans in their facility. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5004 | Automated AI review for Competitive Procedures | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Provide guidance, while maintaining a human in the loop, for various aspects of reviewing processes where competitive procedures include multiple objective and subjective metrics (with potential application to R&D grants and contracts). This enables faster processes and adds unbiased, additional perspectives to competitive review processes. | Provides cost savings, increased efficiency, and process enhancements | Suggestions for human reviewers evaluating the comprehensiveness of a submission (e.g., a Research and Development grant) | Suggestions for human reviewers evaluating the comprehensiveness of a submission (e.g., a Research and Development grant) | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5737 | Healing in Your Hands Wellness Tools for VA Staff and Veterans | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | This AI wellness tool is designed to close a real gap in the VA: the lack of quick, easy-to-access emotional support for both staff and Veterans in the moments they need it most. Right now, if a nurse finishes a high-stress call or a Veteran walks into the clinic feeling anxious, there’s no immediate, built-in way to help them calm down, refocus, or reset before moving forward. Most wellness efforts happen after the fact, or are generic resources that people have to go looking for. This project changes that by putting wellness directly in their hands through a simple QR code they can scan anywhere in the building. From there, they can instantly access calming tools like breathing exercises, affirmations, or therapeutic coloring pages, customized for staff or Veterans, without logging in, downloading an app, or navigating a complex system. It solves the problem of accessibility, timing, and relevance, giving people the right kind of support exactly when and where they need it. | To develop a set of interactive, AI-supported wellness tools that can be deployed across VA facilities and remotely, enhancing daily emotional health, reducing burnout, and supporting mental wellness for both staff and Veteran populations. | 1. Custom Wellness Exercises: Breathing techniques tailored to the moment (e.g., quick 1-minute reset, 3-minute calming session, guided deep breathing for stress release) 2. Personalized Affirmations & Encouragement: Short, uplifting messages generated based on user type (staff or Veteran) and emotional need (e.g., motivation, calm, grounding) 3. Therapeutic Coloring Page Designs: Military-themed or wellness-themed images staff and Veterans can print or color digitally, tailored to user preferences 4. Mindset Reset Prompts: Quick mental focus or grounding exercises for moments of overwhelm, before a shift, or before an appointment 5. Quick-Access QR Links: AI creates a shareable, scannable QR link to instantly access the right wellness resource without login or app downloads | 1. Custom Wellness Exercises: Breathing techniques tailored to the moment (e.g., quick 1-minute reset, 3-minute calming session, guided deep breathing for stress release) 2. Personalized Affirmations & Encouragement: Short, uplifting messages generated based on user type (staff or Veteran) and emotional need (e.g., motivation, calm, grounding) 3. Therapeutic Coloring Page Designs: Military-themed or wellness-themed images staff and Veterans can print or color digitally, tailored to user preferences 4. Mindset Reset Prompts: Quick mental focus or grounding exercises for moments of overwhelm, before a shift, or before an appointment 5. Quick-Access QR Links: AI creates a shareable, scannable QR link to instantly access the right wellness resource without login or app downloads | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5799 | VA Health Connect (VAHC) Customer Relationship Management (CRM) - contact quality analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | There are very few tools available to evaluate the content of a Clinical Contact Center (CCC) contact to provide insight into the purpose and tenor of the call and the satisfaction of the Veteran, and to identify Medical Support Assistant (MSA) training opportunities. AI will provide a window into the overall quality of the call based on the purpose, content, and resolution of the contact. | Improved insight into the quality of cases received by the CCC to identify training opportunities, trends, and opportunities to provide Veteran self-service options. | Contact quality metrics and data for additional analysis to identify trends, training opportunities, and self-service options. | Contact quality metrics and data for additional analysis to identify trends, training opportunities, and self-service options. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-7698 | Laboratory Procedure AI Search Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Artificial intelligence is set to reduce the time needed to find a procedure and, within that procedure, an answer to a posed question. | The use of artificial intelligence for contextual searches of our procedures boosts efficiency. This increased efficiency indirectly enhances patient outcomes by reducing downtime and shortening the laboratory decision-making process. | Artificial intelligence will provide answers to questions posed by Clinical Laboratory Scientists. Additionally, the AI will link to the relevant procedures, ensuring that scientists know which procedures to reference in the future. | Artificial intelligence will provide answers to questions posed by Clinical Laboratory Scientists. Additionally, the AI will link to the relevant procedures, ensuring that scientists know which procedures to reference in the future. | ||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-708 | Abridge Ambient Scribe | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | This solution is intended to solve the following problems: - Clinician burnout, by decreasing the clinical documentation burden - Clinicians focusing on their computer rather than the Veteran during the visit, by reducing clinical documentation during patient visits - Poor-quality and unstandardized clinical documentation | - Reduced clinician burnout - Improved Veteran satisfaction with healthcare experiences - Future: Improved billing/coding, which can provide financial benefit to the VA | Draft clinical notes based on audio from clinical encounters. | Draft clinical notes based on audio from clinical encounters. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6426 | Knowtex Ambient Scribe | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | This solution intends to reduce clinician burnout and improve documentation quality. | Reduced clinician burnout, increased efficiency, improved documentation quality | Draft clinical notes for clinician review | Draft clinical notes for clinician review | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6479 | Secure Access Service Edge (SASE) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The AI is intended to identify, prevent, and mitigate risks from unauthorized data access, exfiltration, and misuse of sensitive information, including PHI/PII in cloud applications and network traffic. It addresses gaps in visibility, compliance enforcement, and threat detection across a hybrid work environment. | - Improved data security by automatically detecting and alerting/blocking risky or non-compliant data transfers in real time. - Operational efficiency by automating policy enforcement and reducing manual review workload. - Enhanced compliance with federal mandates by monitoring PHI/PII across all access points, thereby maintaining trust with Veterans and the public through stronger protection of sensitive information. | The AI produces actionable security alerts, risk scores, and policy compliance reports that identify potential exposure of PHI/PII. Outputs include prioritized incidents, visualized trends in application usage, anomaly detection summaries, and automated compliance audit logs to support remediation and decision-making. | The AI produces actionable security alerts, risk scores, and policy compliance reports that identify potential exposure of PHI/PII. Outputs include prioritized incidents, visualized trends in application usage, anomaly detection summaries, and automated compliance audit logs to support remediation and decision-making. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3898 | Lung Cancer Prediction Model | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | 1. Treatment often proves difficult when lung cancer is diagnosed in advanced stages - long-term outcomes are significantly improved when it is detected early. Screening programs are underutilized and notably lacking in specific populations. 2. This includes individuals at highest risk of lung cancer, including US Veterans. The Dayton VA Medical Center (DVAMC) initiative is intended to aid in the detection of suspicious pulmonary nodules, expedite referrals for further evaluation, and improve access to timely cancer care. Among patients undergoing screening CTs, approximately 50% of adults will have at least one lung nodule in their lifetime. 3. Tools that provide aided detection and personalized risk of developing lung cancer may identify individuals who could require additional or more frequent screening. Sybil is a deep learning model that predicts future lung cancer risk from a single low-dose chest computed tomography scan (LDCT). 4. The AI model utilizes one LDCT and the assistance of a radiologist to predict the risk of a patient developing lung cancer within six years. No additional clinical data is required. It has been validated on 3 independent data sets totaling almost 30K LDCTs. When a CT is processed by the algorithm, it produces a series of six numbers corresponding to the patient's aggregated risk of developing lung cancer over the next six years. The hope is that Sybil will allow lung cancer screening programs to be better utilized and promote increased care for at-risk populations. | When fully implemented, it can be used to personalize the screening regimen, calling high-risk patients back earlier to identify cancers in the early stages and potentially reducing the screening burden on low-risk patients. | Descriptive statistics and basic analysis will be carried out using Excel. Descriptive statistics will analyze the data to be gathered as indicated, including age, race, sex, and smoking status of patients. Sybil AI will calculate the sensitivity and specificity of predicted lung cancer rates to perform a receiver operating characteristic (ROC) analysis to validate the accuracy of the model. Power analysis may be performed to confirm the sample size needed with a significance level of 0.05 and a power of 0.80 using previously published area-under-the-curve data. Additional analysis such as logistic regression will be carried out using SPSS (Cary, NC). A statistician employed by Wright State University may assist with analysis of de-identified data only and after signing a dedicated data use agreement per the Dayton VAMC privacy policy. | Descriptive statistics and basic analysis will be carried out using Excel. Descriptive statistics will analyze the data to be gathered as indicated, including age, race, sex, and smoking status of patients. Sybil AI will calculate the sensitivity and specificity of predicted lung cancer rates to perform a receiver operating characteristic (ROC) analysis to validate the accuracy of the model. Power analysis may be performed to confirm the sample size needed with a significance level of 0.05 and a power of 0.80 using previously published area-under-the-curve data. Additional analysis such as logistic regression will be carried out using SPSS (Cary, NC). A statistician employed by Wright State University may assist with analysis of de-identified data only and after signing a dedicated data use agreement per the Dayton VAMC privacy policy. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-421 | Patient Care Services Integration Platform (PCSIP) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | 1. Improve patient flow at VAMCs so that 20% more Veterans are provided care with existing staff and resources. 2. Enable use of FDA-approved devices in a FISMA-compliant manner. 3. Optimize the Veteran journey from direct care to community care and back. | 1. Improve patient flow at VAMCs so that 20% more Veterans are provided care with existing staff and resources. 2. Enable use of FDA-approved devices in a FISMA-compliant manner so that staff can get the job done easily from VA-provided workstations, with authentication, authorization, and read/write access to multiple clinical systems. 3. Optimize the Veteran journey from direct care to community care and back. | 1. Estimated time for appointment for patient in Direct Care within VA and Community Care. 2. Estimated time for patient to reach PACU or Wards in Surgery Workflow. Estimated time for Discharge of patient in Emergency, Surgery, and Wards. 3. Patient Past Medical History Summary generated from Patient Medical Information in Vista/CPRS, Cerner and combined with Outside Records received in PDF format. Patient Summary generated is customized per specialty and per workflow, e.g., Emergency, Surgery (Cardiology, Oncology, Orthopedics, Gastroenterology), Anesthesia, Behavioral Health, In-patient Wards. 4. Clinical summary of care provided summarized from Patient Data in EHRs (Epic, Cerner, Meditech, Vista/CPRS) and Ambient Dictation-generated note. 5. ICD and CPT codes generated for Ambient Dictation notes for billing optimization. 6. Care Pathways determined from Ambient Dictation between care providers and patients. 7. Case Management notes generated from calls between Case Managers and Care Providers internally within the VA. 8. Ambient AI-generated notes specific for workflows and specialties including Emergency, Radiation Oncology, Surgery, Wards, Behavioral Health and Primary Care. 9. Radiology protocoling notes generated from Patient Info in EHRs and Ambient Dictation. | 1. Estimated time for appointment for patient in Direct Care within VA and Community Care. 2. Estimated time for patient to reach PACU or Wards in Surgery Workflow. Estimated time for Discharge of patient in Emergency, Surgery, and Wards. 3. Patient Past Medical History Summary generated from Patient Medical Information in Vista/CPRS, Cerner and combined with Outside Records received in PDF format. Patient Summary generated is customized per specialty and per workflow, e.g., Emergency, Surgery (Cardiology, Oncology, Orthopedics, Gastroenterology), Anesthesia, Behavioral Health, In-patient Wards. 4. Clinical summary of care provided summarized from Patient Data in EHRs (Epic, Cerner, Meditech, Vista/CPRS) and Ambient Dictation-generated note. 5. ICD and CPT codes generated for Ambient Dictation notes for billing optimization. 6. Care Pathways determined from Ambient Dictation between care providers and patients. 7. Case Management notes generated from calls between Case Managers and Care Providers internally within the VA. 8. Ambient AI-generated notes specific for workflows and specialties including Emergency, Radiation Oncology, Surgery, Wards, Behavioral Health and Primary Care. 9. Radiology protocoling notes generated from Patient Info in EHRs and Ambient Dictation. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2831 | CITC Automated Fax Relabeling | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Speed the intake of outside medical records. Currently the VA system as a whole gets millions of pages of faxes a day. The subset of these related to care in the community (CITC) contain results of care that the VA itself has paid for in non-VA facilities. As CITC use increases, the burden of processing these faxes (uploading them and making them useful for VA physicians) is increasing. Delays in upload time lead to missing information about non-VA care. | Enhancing patient outcomes (results of care that we are paying for are made more quickly available within the VA); increased efficiency (reduce or eliminate tedious nursing work of manually renaming faxes). | Renamed faxes (with patient identifiers and document classification). The software is run by care in the community (CITC) staff (on their local GFE). These staff 'point' the fax processor program to the folder that contains inbound faxes. The software creates a subdirectory folder inside of the inbound fax folder and the output faxes (relabeled) are put into the output subdirectory. | Renamed faxes (with patient identifiers and document classification). The software is run by care in the community (CITC) staff (on their local GFE). These staff 'point' the fax processor program to the folder that contains inbound faxes. The software creates a subdirectory folder inside of the inbound fax folder and the output faxes (relabeled) are put into the output subdirectory. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5375 | Screen for Esophageal Adenocarcinoma | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Veterans disproportionately have many of the important risk factors for esophageal adenocarcinoma (EAC), and research has demonstrated a substantial gap in the prevention of EAC among Veterans. Healthcare providers can utilize the Kettles Esophageal and Cardia Adenocarcinoma prediction tool (K-ECAN) to guide decision-making and identify potential patients who should be offered screening for EAC. | If EAC is identified at the earliest stage (intramucosal cancer without metastases), the 5-year survival with surgery is greater than 85%, and at least as good with endoscopic therapy alone. Implementation of such an intervention would bridge the gap between population-health screenings and personalized management in a manner that would minimize the impact on payer budget and resources. Since individuals are more commonly up to date for colorectal cancer screening (for instance, with colonoscopy), it is particularly suitable to harness the missed opportunity of EAC screening at the time of screening for colorectal cancer. Also, when an EAC screening with upper endoscopy is performed simultaneously with colonoscopy, it is much less expensive. This project will also serve as a roadmap for how such precision approaches to screening can be applied to other rare cancers. | The K-ECAN is an automated prediction tool for EAC that harnesses the electronic health record at the point of contact during opportune moments (such as at the time of scheduling colorectal cancer screening) to guide decision-making and identify patients who should be offered screening for EAC. The provider would know their patient's predicted annual incidence of EAC or esophagogastric junction adenocarcinoma (EGJAC) (at least [XXX] per 100,000) and their predicted annual mortality (at least [YYY] per 100,000). They would also know additional risk factors, such as age, sex, Body Mass Index (BMI), Gastroesophageal reflux disease (GERD), and smoking history. | The K-ECAN is an automated prediction tool for EAC that harnesses the electronic health record at the point of contact during opportune moments (such as at the time of scheduling colorectal cancer screening) to guide decision-making and identify patients who should be offered screening for EAC. The provider would know their patient's predicted annual incidence of EAC or esophagogastric junction adenocarcinoma (EGJAC) (at least [XXX] per 100,000) and their predicted annual mortality (at least [YYY] per 100,000). They would also know additional risk factors, such as age, sex, Body Mass Index (BMI), Gastroesophageal reflux disease (GERD), and smoking history. | Yes | https://osf.io/tvu8z/?view_only=0f912ec4a63d410abac4943426bc92f2 | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1143 | VistA Records Accessed by SQL (VRAS) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The VistA Records Accessed by SQL (VRAS) is a transformative initiative aimed at modernizing data accessibility and analytics within the Department of Veterans Affairs (VA). By eliminating reliance on costly middleware and reducing process overhead, VRAS enables real-time, SQL-compatible data access directly from VistA. This initiative establishes a seamless, efficient data pipeline, enhancing reporting capabilities, reducing infrastructure complexity, and supporting data-driven decision-making across all VistA instances. With improved accessibility to high-quality data, VA users can generate timely, accurate insights that drive operational efficiency and better outcomes for departments that directly affect Veteran care. | Full AI access to each VistA instance via ODBC/JDBC. | All VistA data, from supply chain to medical records. | All VistA data, from supply chain to medical records. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3723 | Research and Development Computer Center (RDCC) System Transformation AI Supported Reporting | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The goal of our AI use case is to manage grants and support reporting related to grants allocation. | Cost savings, increased efficiency | Generating reports related to grants allocation for congressional reporting | Generating reports related to grants allocation for congressional reporting | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5101 | Categorization and Labelling Administrative Requests | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The categorization and labeling functionality is intended to support functions such as administrative intake requests. This is intended to support use cases such as rapid triage of Information Technology intake requests. This solves the problem of manual data entry. | Increased efficiency. | 1. The system selects a category based on a list of categories defined by a person. 2. The system generates labels of under 50 characters for long text fields. | 1. The system selects a category based on a list of categories defined by a person. 2. The system generates labels of under 50 characters for long text fields. | Yes | https://github.com/department-of-veterans-affairs/LEAF/tree/agent/LEAF_Agent | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-651 | AI Incident Management | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The proposed AI system, referred to as the AI Incident Management (AIM), is designed to aid in faster problem determination and remediation during High Priority (HPI) and Critical Priority (CPI) Incidents impacting the performance or availability of critical systems. | By leveraging historical data, real-time performance monitoring, and advanced machine learning models, the system will identify potential causes and suggest remediation actions in real time, significantly reducing downtime and improving system reliability and resiliency. | 1. Aggregate data from multiple monitoring and incident management tools. 2. Use a Large Language Model (LLM) to analyze, correlate, and interpret this data. 3. Provide actionable insights and recommendations to IT personnel during incident triage. | 1. Aggregate data from multiple monitoring and incident management tools. 2. Use a Large Language Model (LLM) to analyze, correlate, and interpret this data. 3. Provide actionable insights and recommendations to IT personnel during incident triage. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-528 | VA FSC AI ChatHelper | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Customer Service Representatives (CSRs) can ask the VA Financial Services Center (FSC) AI ChatHelper questions and receive policy- and standards-based answers (e.g., Top Resolution Scripts, Historical Case Notes or Resolutions, Reference Links, Text, or Visual instructions). With this assistance, CSRs can converse with customers until reaching a final resolution the customer finds satisfactory, and can provide feedback on the relevance and accuracy of ChatHelper responses. This solves the problem of providing a standards-based knowledge source for the FSC's help desk CSRs. | Increased help ticket capacity for additional volume, increased efficiency, and cost avoidance | Knowledge Base content, help desk resolutions, recommended solutions to customer inquiries | Knowledge Base content, help desk resolutions, recommended solutions to customer inquiries | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2143 | Call Center Knowledge Navigator | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Government Benefits Processing | Pilot | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | 1. Inaccurate transcription of phone calls by the current system. 2. Need for enhanced time-to-effectiveness for new employees of the Education Call Center. 3. Improvements to quality review of transcribed conversations. | Increased efficiency and increased accuracy of answers to beneficiary queries | - Version 1 outputs a string in response to a user query with cited reference materials - Version 2 will hyperlink these materials for user reference - Future versions will include transcription output and automated answer strings based on live transcription | - Version 1 outputs a string in response to a user query with cited reference materials - Version 2 will hyperlink these materials for user reference - Future versions will include transcription output and automated answer strings based on live transcription | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4804 | Master Claims Assistance Tool (M-CAT) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Government Benefits Processing | Pilot | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | M-CAT is designed to automatically classify, extract, and validate critical information from diverse regulatory and/or policy references. This includes the automation of complex data extraction tasks, which are traditionally time-consuming and prone to human error. M-CAT provides quick access to regulatory and/or policy guidance and summarizes information within the prescribed Retrieval-Augmented Generation (RAG) model. These retrieval results are maintained for each user and aggregated to inform focused training support needs. The overall generated response includes contextual integration, interactive elements for source expansion, a feedback mechanism, display of current trending topics as suggested queries, and prompted follow-up questions post query. | M-CAT provides quick access to regulatory and/or policy guidance and summarizes information within the prescribed RAG model, guiding end-users to achieve a clearer understanding and clearer implementation of VA policy. Access to this information improves processing capabilities, which immediately translates into more timely and accurate delivery of benefits. | M-CAT provides quick access to regulatory and/or policy guidance and summarizes information within the prescribed RAG model. These retrieval results are maintained for each user and aggregated to inform focused training support needs. The overall generated response includes contextual integration, interactive elements for source expansion, a feedback mechanism, display of current trending topics as suggested queries, and prompted follow-up questions post query. | M-CAT provides quick access to regulatory and/or policy guidance and summarizes information within the prescribed RAG model. These retrieval results are maintained for each user and aggregated to inform focused training support needs. The overall generated response includes contextual integration, interactive elements for source expansion, a feedback mechanism, display of current trending topics as suggested queries, and prompted follow-up questions post query. | Yes | https://huggingface.co/meta-llama/Llama-3.1-8B / https://huggingface.co/docs/transformers/en/model_doc/mixtral / https://www.ibm.com/docs/en/watsonx/saas?topic=ai-risk-atlas | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2307 | Audit Service Connection Designations Associated with Prescriptions | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Reduce waste or fraud. The model predicts the service connection flag associated with prescriptions. It uses drug information and service-connected disabilities. | Cost savings, increased efficiency, and increased revenue. | The model predicts the service connection flag associated with prescriptions. It uses drug information and service-connected disabilities. The outputs are binary – Yes/No. | The model predicts the service connection flag associated with prescriptions. It uses drug information and service-connected disabilities. The outputs are binary – Yes/No. | No | https://h2o.ai/platform/ai-cloud/make/h2o/ | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2791 | Internet of Medical Things (IoMT) Inventory | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Automated categorization of medical devices based on MAC address, installed software, network activity, etc. The previous method was to schedule downtime with clinical staff to manually verify whether patches and vulnerability fixes had been installed and, if not, to install them manually. | Used to identify devices and report operating system (OS) updates and vulnerability patches that have been installed to meet the agency's NIST requirements. This will result in savings in personnel time (work hours) and reduce the downtime of medical devices. | ML collects and organizes the information, and a person then acts on it. A person generates a report, and the AI collects and organizes the information in the format the user requested. The output allows the user to see which systems have and have not been patched, are properly configured, etc. | ML collects and organizes the information, and a person then acts on it. A person generates a report, and the AI collects and organizes the information in the format the user requested. The output allows the user to see which systems have and have not been patched, are properly configured, etc. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-294 | Nurse Proficiency Writer (NPW) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Human Resources | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Many nurses lack knowledge of the standards and the requirements needed to meet their performance expectations. The GPT assists the nurse by evaluating their input against the standards. The dialogue between the GPT and the nurse results in improvements in the quality of the submission. The GPT helps to recognize all the nurse's accomplishments and offers suggestions for improvement. In the current performance process, the nurse is rated against the standards, but feedback is often lacking. The quality of the performance report often relies on the supervisor's writing ability and understanding of the standards. The benefit to the supervisor is that quality content is submitted for evaluation and incorporation. This saves both the nurse and supervisor time. The tool removes bias, as it evaluates submissions against standards that are the same regardless of the submitter. This same process assists with promotion contributions. With the dissolution of the Nursing Professional Standards Board, it directly supports frontline nurses (as an expert GPT in the knowledge standards). Lastly, the supervisor is ultimately responsible for knowing their employee and evaluating their performance. The GPT output can enhance communication; however, the decisions are solely the responsibility of the nurse supervisor (low risk relative to reward). The time ROI is from hours to minutes. The internal development of this tool has the potential to help 120,000 nurses. | Standardization of the nurse review process: the Lamplighter AI Chatbot eradicates subjectivity and ensures objective, consistent evaluations, revolutionizing the standard nurse review process. Embedded nursing standards and time-efficient guidance: it overcomes standards knowledge gaps and time constraints by integrating expert guidance, facilitating efficient and informed review completion. Assistance with professional narrative creation: it addresses the dependence on writing skills by providing narrative assistance, allowing nurses to accurately document their performance based on their professional skills, not their writing ability. Maximizing return on investment: it enhances the return on investment for the review process, targeting the root cause of low ROI by streamlining the workflow, which translates nurses' and managers' efforts into meaningful feedback. Efficient and tailored review tool: designed to combat process inefficiency, the Lamplighter AI Chatbot refines the review process, saving time and significantly improving the quality of performance evaluations. | GPT text that provides narrative for ePerformance integration, textual contribution for promotion (including plans), and skills recommendations to improve performance. | GPT text that provides narrative for ePerformance integration, textual contribution for promotion (including plans), and skills recommendations to improve performance. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5956 | Ask Sage – Generative AI Platform for VA Administrative Efficiency | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | VA staff often spend significant time searching for information, drafting communications, and compiling documentation. These tasks can be repetitive, time-consuming, and prone to inconsistencies when performed manually. Ask Sage addresses this by providing a secure, conversational interface that can quickly produce draft materials, summarize complex information, and retrieve relevant guidance, enabling staff to focus on higher-value work. | Increased administrative efficiency through faster document drafting and information retrieval. Greater consistency and quality in written materials and policy summaries. Reduced workload for staff, allowing more focus on mission-critical duties. Cost savings from reduced staff time spent on repetitive, manual tasks. Improved timeliness and responsiveness in internal and external communications. | Draft correspondence, reports, and summaries for staff review. Condensed explanations of policies, regulations, and procedures. Suggested formats, outlines, and checklists for administrative processes. Contextual research support and topic briefings. | Draft correspondence, reports, and summaries for staff review. Condensed explanations of policies, regulations, and procedures. Suggested formats, outlines, and checklists for administrative processes. Contextual research support and topic briefings. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2778 | AI Transcription and Summary in Veteran Narrative Medicine Program | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | 1) The MyLife MyStory program tends to be time intensive, often requiring about four hours of the interviewers' time per patient, which limits the number of Veterans who can receive the program. 2) The narratives that are produced in MyLife MyStory are sometimes lengthy. When provided with an AI-generated summary, busy clinicians accessing the narrative in a patient's chart may come away with a clearer understanding of the patient's story. | 1) Greater efficiency in the MyLife MyStory program with preserved acceptability, allowing for more Veterans to enjoy and benefit from the program. 2) Due to the inclusion of patient-approved, AI-generated summaries, busy clinicians accessing MyLife MyStory narratives in patients' charts will more often come away with a clear understanding of the patients' experiences through their stories | 1) Transcriptions of patient interviews, generated using natural language processing. 2) An AI-generated summary of each patient's narrative. | 1) Transcriptions of patient interviews, generated using natural language processing. 2) An AI-generated summary of each patient's narrative. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3029 | Sharing RVU Provider Metrics | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Next fiscal year, the VA will be focusing heavily on RVUs to measure provider productivity. However, the data can currently only be found in Pyramid Analytics, which is very difficult to use, and very few people have access to it. This leaves all frontline employees, and many supervisors, in the dark as to whether they are meeting their performance metrics. | This AI model will read the data from the RVU report and send an individual email to each employee, with their supervisor CC'ed, so that they know where they stand on their RVU metrics. | Individual emails to employees and supervisors. | Individual emails to employees and supervisors. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3152 | Program Evaluation for Commission on Collegiate Nursing Education (CCNE) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | To better understand and manage large volumes of written data, assisting in identifying student trends and curriculum gaps and in evaluating the program against CCNE standards and expected outcomes. | - Improved patient safety by enhancing learner outcomes and assisting and guiding nursing resident progress from advanced beginner to competent professional. - Increased efficiency by reducing the labor hours spent evaluating large volumes of data. The saved time can be shifted to curriculum development, resident oversight, and program improvement. - Assisting in attaining Registered Nurse Transition-to-Practice (RNTTP) program CCNE accreditation. | The reports identify positive program trends and areas of improvement. AI will also be used in evaluating resident trajectory utilizing Patricia Benner's novice-to-expert theory (NTE). VAGPT as tested has already demonstrated specificity in identifying nurse resident progress related to Benner's NTE. | The reports identify positive program trends and areas of improvement. AI will also be used in evaluating resident trajectory utilizing Patricia Benner's novice-to-expert theory (NTE). VAGPT as tested has already demonstrated specificity in identifying nurse resident progress related to Benner's NTE. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1938 | Automated Claims Processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The purpose of the AI is to automate aspects of NCA benefits and application processing to decrease the case processing time and increase the customer service delivered to Veterans and their next of kin. Currently, it is applicable to the Pre-Need and Presidential Memorial Certificate programs. In the future, other business lines may be included, such as the Headstones, Markers, Medallions, and Urns and Plaques programs. This benefits NCA by automating aspects of the workload (data entry, case creation, document digitization, letter creation, and status updates). | The outcomes support expedited service delivery of benefits to Veterans and their next of kin. About 30% of Veteran applicants receive their Pre-Need eligibility determinations within 24 hours, without human intervention. | The Hyperscience platform (optical character recognition (OCR), natural language processing (NLP), and machine learning (ML)) extracts document data, uses application programming interfaces (APIs) to query data from other VA systems, and applies robotic process automation (RPA) to enter data into existing legacy NCA systems: Eligibility Office Automation System and Web Presidential Memorial Certificates. As the case is processed, automation creates documents that combine relevant claimant information (for use by claim agents), updates statuses, and creates letters, as appropriate. No original content is created through these processes. | The Hyperscience platform (optical character recognition (OCR), natural language processing (NLP), and machine learning (ML)) extracts document data, uses application programming interfaces (APIs) to query data from other VA systems, and applies robotic process automation (RPA) to enter data into existing legacy NCA systems: Eligibility Office Automation System and Web Presidential Memorial Certificates. As the case is processed, automation creates documents that combine relevant claimant information (for use by claim agents), updates statuses, and creates letters, as appropriate. No original content is created through these processes. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2430 | TERA Memorandum Automation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The Toxic Exposure Risk Activity (TERA) Memo automation is intended to reduce the time that users spend manually researching and submitting answers for the TERA memo, which is a required form. | The benefit of TERA memo automation is a significant reduction in the time claim processors spend filling out the TERA memo, allowing for more efficient claim processing. | The outputs of the TERA memo automation are pre-populated answers to questions on the TERA form, using data sourced from Veteran records and documents. | The outputs of the TERA memo automation are pre-populated answers to questions on the TERA form, using data sourced from Veteran records and documents. | Yes | Python | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2717 | VA.gov Chatbot: Summative Content | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | AI can provide around-the-clock support without long wait times, allowing our human call center agents to help individuals with more complex concerns. The virtual agent will allow veterans to self-serve, ask questions at their own pace, complete tasks, and find answers and information more easily. | Veterans receive increased access to self-service capabilities - the virtual agent will allow veterans to self-serve, ask questions at their own pace, complete tasks, and find answers and information more easily. | Benefits information based on VA.gov content. | Benefits information based on VA.gov content. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2537 | Heuristic Behavior Analytics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Identify anomalous behaviors from established baselines for VA accounts and hosts. The output is a risk score that is provided to CSOC analysts to more quickly identify accounts and hosts that have been compromised after the initial authentications associated with a computational session. | Quicker identification, response, and mitigation of cybersecurity incidents. | Risk scores for accounts and hosts, based on a comparison of recent and historic activities. | Risk scores for accounts and hosts, based on a comparison of recent and historic activities. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3275 | Concept Clustering | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Electronic discovery (e-discovery) refers to discovery in legal proceedings such as litigation, government investigations, or Freedom of Information Act (FOIA) requests, where the information sought is in electronic format. As data volumes and the number of civil cases continue to rise, so does the need for tools to help government agencies manage electronically stored information used for discovery for litigation, investigations, and FOIA requests. This AI model analyzes the content of documents to identify contextually similar clusters of text. It groups similar documents automatically to reveal themes, without predefined keywords. It runs once on workspace text and metadata to group documents by similarity, with no external data or ongoing training. | The expected benefits include reduced manual effort in organizing and identifying related documents, streamlined information retrieval, and improved efficiency for VA staff. Emphasis is placed on cost savings through reduction of labor-intensive processes, while also enhancing user experience by improving search and discovery. | Identified cluster groups | Identified cluster groups | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-36 | Evolv WDS - OSLE | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | AI-driven security screening system designed to detect concealed weapons and threats in high-traffic areas. The AI scans for objects in the shapes of weapons and then develops maps of the body to display the location and shape of the object. | Occupant screening for weapons to ensure that facilities are creating a safe and secure environment for Veterans, Visitors, and Employees. | The AI scans for objects in the shapes of weapons and then develops maps of the body to display the location and shape of the object. | The AI scans for objects in the shapes of weapons and then develops maps of the body to display the location and shape of the object. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-77 | Avigilon Camera/Search Function - OSSO | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | A search function in a security video surveillance system that searches for points of interest in the video. | The integration of artificial intelligence (AI) into VA Police operations has significantly enhanced efficiency and response capabilities. Traditionally, personnel were required to manually review recorded video and images which was a time-consuming process. With AI, these records can now be autonomously searched, drastically reducing manpower hours. Moreover, during emergent situations, AI has proven instrumental in locating persons of interest, such as missing individuals or patients. This capability has greatly improved the VA Police’s ability to respond swiftly and effectively, ultimately enhancing safety and operational readiness. | Video or images of requested searches from the video surveillance systems. | Video or images of requested searches from the video surveillance systems. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2184 | National Training Team\|Schools — FAQ Gen AI Dashboard | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Categorization of large-volume question submissions with suggested generated answers based on past-provided information. | Increased efficiency. There are currently six permanent members of the NTT\|S staff and approximately 40,000 SCOs. It is physically impossible for a staff this size to answer all of the questions they ask in a given session. The model reduces the count of overall questions to a reasonable amount answerable by a subject matter expert (SME). It also uses previous answers provided by SMEs in the case of duplicate or similar questions to speed preparation of FAQ documentation for publication following a session — if they’ve answered a question before, they don’t need to spend time answering it again. Currently, the project has fallen below expectations in terms of usefulness to the end users. NTT\|S suffered a major operational setback earlier this year when the contract with Adobe Connect was severed unexpectedly by the new administration. Implementing Office Hours using Microsoft Teams as an alternative has had several obstacles, including a substantial decrease in questions SCOs are able to submit within a session. Because of the lower volume, SMEs can often answer most questions within a session and review remaining questions without the assistance of the model. | The system outputs data to a Microsoft Power BI report — the report lists the top ten categories of questions asked within a particular office hours session, with a suggested question based on submissions and suggested answers to the generated question based on previously provided answers in FAQ documentation. | The system outputs data to a Microsoft Power BI report — the report lists the top ten categories of questions asked within a particular office hours session, with a suggested question based on submissions and suggested answers to the generated question based on previously provided answers in FAQ documentation. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3291 | Payment Redirect Fraud (PRF) Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The goal of the Payment Redirect Fraud (PRF) model is to identify which Direct Deposit (DD) changes are likely to be fraudulent and refer them on to investigators. | Preventing Veterans from having their benefits stolen by fraudsters builds their trust in the VA. The accurate delivery of benefits without interruption creates operational efficiencies and demonstrates fiscal stewardship and prudent use of taxpayer dollars. | The output is a risk indicator on how likely each daily direct deposit change may be fraudulent. | The output is a risk indicator on how likely each daily direct deposit change may be fraudulent. | Yes | https://scikit-learn.org/stable/ | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4722 | Automated Decision Support | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Identify medical evidence for claims processing. Automated Decision Support (ADS) is the utilization of technology to maximize operational efficiency by reducing administrative tasks to produce desired outcomes. These tools will improve the efficiency of the existing claims process, reduce future claims backlog, and will result in faster, more accurate and consistent decisions for Veterans and beneficiaries. ADS is not intended to replace trained claims processors – it provides tools to assist with development tasks at a time when VBA is receiving more claims than ever before. | Automated Decision Support (ADS) automates some of the up-front time-consuming development activities of retrieving information and identifying potentially relevant VA Schedule of Rating Disabilities (VASRD)-related evidence of record. It accelerates the ordering of exams and assists in marking claims as Ready for Decision (RFD). This can lead to faster and more consistent decisions for Veterans and unlock VBA’s ability to uncover data trends. | Depending on claim criteria, ADS may produce the following five documents: - Automated Review Summary Document (ARSD) - Summarizes relevant medical and service information from Veteran documents with page numbers and hyperlinks to the original source documents. It also explains the ADS outcome. - Health Data Repository (HDR) document - Indexes Veteran medical records data from VistA sources (VAMC and Community Care visits) - Standard Commercial-Off-The-Shelf (COTS) Integration Platform (SCIP) document - Indexes Veteran medical images from VistA sources - Electronic Health Record-Text (EHR-Text) document - Indexes Veteran medical records data from EHR sites - Electronic Health Record-Image (EHR-Image) document - Indexes Veteran medical images from EHR sites ADS also takes action in the Veterans Benefits Management System (VBMS) to update end product (EP) status, order or draft Compensation & Pension (C&P) exams, and add notes. No final benefits decisions are made by ADS and no payments are initiated by ADS. | Depending on claim criteria, ADS may produce the following five documents: - Automated Review Summary Document (ARSD) - Summarizes relevant medical and service information from Veteran documents with page numbers and hyperlinks to the original source documents. It also explains the ADS outcome. - Health Data Repository (HDR) document - Indexes Veteran medical records data from VistA sources (VAMC and Community Care visits) - Standard Commercial-Off-The-Shelf (COTS) Integration Platform (SCIP) document - Indexes Veteran medical images from VistA sources - Electronic Health Record-Text (EHR-Text) document - Indexes Veteran medical records data from EHR sites - Electronic Health Record-Image (EHR-Image) document - Indexes Veteran medical images from EHR sites ADS also takes action in the Veterans Benefits Management System (VBMS) to update end product (EP) status, order or draft Compensation & Pension (C&P) exams, and add notes. No final benefits decisions are made by ADS and no payments are initiated by ADS. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4763 | Mail Automation Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Mail Automation Services (MAS) is an if-this-then-that (ITTT) logic-based service which is utilized to process inbound mail across the VBA enterprise, including the receipt of claims materials from Veterans, Veteran Service Organizations, and other stakeholders. The platform ingests and reviews approximately 40k packets of information per day, primarily derived from the Centralized Mail Portal. It utilizes prescribed machine learning and Natural Language Processing (NLP), Intelligent Form Recognition (IFR), Optical Character Recognition (OCR), and Intelligent Character Recognition (ICR) to extract and process data from an average of 95 fields on over 1,500 form layouts within VA’s purview. | MAS has completed over 2 million actions in the 2024 calendar year to date. It completes initial intake processing actions to support workload and claims management and has led to cost savings and increased efficiency - cost savings and increased efficiency data is available upon request. | The most common outcomes are VBA End-Product establishment within the Veterans Benefits Management System (VBMS) and correct business line orientation for ingested submissions. However, MAS also triggers correspondence in certain circumstances and creates or adjusts VBMS notes, tracked items, and flashes. The MAS platform runs under the Veterans Benefits Administration Automation Platform (VBAAP). | The most common outcomes are VBA End-Product establishment within the Veterans Benefits Management System (VBMS) and correct business line orientation for ingested submissions. However, MAS also triggers correspondence in certain circumstances and creates or adjusts VBMS notes, tracked items, and flashes. The MAS platform runs under the Veterans Benefits Administration Automation Platform (VBAAP). | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-790 | Pension Optimization Initiative (POI) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | VBA’s POI is a transformative effort that seeks a Managed Services Provider (MSP) to convert an existing manual business process into automated processing of Pension, Dependency Indemnity Compensation (DIC), and Burial claims. This effort is focused on dramatically improving the Veteran Experience while significantly reducing costs. POI automation will process Pension, DIC, and Burial claims more quickly, consistently, and efficiently. | Reduced average days to complete claims processing. Reduced claims inventory. Reduced amount of manual claims processing hours to accomplish mission. | 1) updated Veteran/Beneficiary files, 2) completed Pension, DIC, and Burial-related claim actions to include stop and start of awards (delivery of benefits), 3) Notification Letters to Claimants. | 1) updated Veteran/Beneficiary files, 2) completed Pension, DIC, and Burial-related claim actions to include stop and start of awards (delivery of benefits), 3) Notification Letters to Claimants. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1069 | AGFA DR 800 with MUSICA Dynamic | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | It can accelerate X-ray workflows. The optional image processing allows users to conveniently select image processing settings for different patient sizes and examinations. The image processing algorithms in the new device are similar to those previously cleared and used in Agfa’s radiography portfolio today which includes the DR 600 (K152639) and DR 400 (K141192). The addition of the dynamic image processing is identical to the predicate device (K140380). | Accelerate X-Ray workflows. | Recommendations for image processing settings. | Recommendations for image processing settings. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1237 | Beckman Coulter DxH 800 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Generating blood cell measurements. The DxH 800 System provides automated complete blood count, leukocyte differential, nucleated red blood cell (NRBC) enumeration and reticulocyte analysis as well as an automated method for enumeration of the Total Nucleated Cells (TNC) and Red Blood Cells (RBC) in body fluids. | Accelerate blood cell measurements. | The DxH 800 System provides automated complete blood count, leukocyte differential, nucleated red blood cell (NRBC) enumeration and reticulocyte analysis as well as an automated method for enumeration of the Total Nucleated Cells (TNC) and Red Blood Cells (RBC) in body fluids. | The DxH 800 System provides automated complete blood count, leukocyte differential, nucleated red blood cell (NRBC) enumeration and reticulocyte analysis as well as an automated method for enumeration of the Total Nucleated Cells (TNC) and Red Blood Cells (RBC) in body fluids. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1397 | Xeleris V | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Xeleris V AI-enabled clinical applications provide an accurate, efficient, and reproducible way to segment organs. AI-enabled clinical applications have the potential to greatly improve efficiency and precision in nuclear oncology. | Xeleris V leverages AI to accurately segment organs for quantitation and dosimetry calculations. The AI demonstrated a 58% average reduction in the time required for the user to process and calculate the dose. This AI-enabled lung segmentation demonstrated an overall success rate of 89% and reduction in the number of clicks by an average 57%. | Automated segmentation and dosimetry recommendations. | Automated segmentation and dosimetry recommendations. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-163 | GE Signa Artist | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The software used on the proposed SIGNA Artist system has been modified to include the AIR Recon DL feature. The User interface provides operators of the system with new options for selecting AIR Recon DL and adjusting the associated level of image noise reduction. The resulting images can have higher Signal to Noise Ratio (SNR) and improved sharpness compared to images reconstructed without AIR Recon DL. | The nonclinical testing demonstrated that AIR Recon DL does improve SNR and image sharpness while maintaining low contrast detectability and having minimal impacts to noise spectral content, average signal intensity, or the appearance of motion artifacts. AIR Recon DL was also able to maintain image SNR and did not sacrifice sharpness for images acquired with a reduced scan time. The nonclinical testing passed the defined acceptance criteria, and did not identify any adverse impacts to image quality or other concerns related to safety and performance. | Denoised images. The images produced by the SIGNA Artist system reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. | Denoised images. The images produced by the SIGNA Artist system reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1684 | Computer Aided Detection (CADe) of Neoplasia during Colonoscopy - “GI Genius” | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The device improves endoscopy quality by aiding neoplastic lesion detection during procedures. Multiple studies and meta-analyses have found these devices increase adenoma detection rates (ADR), a well-established metric for colonoscopy quality that is related to colon cancer incidence and death. Randomized implementation of the devices across VHA facilities allowed for a pragmatic evaluation of the impact of these devices on ADR. The evaluation demonstrated that the provision of colonoscopy Computer Aided Detection devices resulted in a statistically significant 21% increase in the odds of adenoma detection and an absolute increase in ADR of approximately 4% compared to colonoscopy without CADe. | These devices demonstrably improve endoscopy quality within the VHA, which in turn improves outcomes for Veterans, ultimately reducing morbidity and mortality. Multiple studies and meta-analyses have found these devices increase adenoma detection rates (ADR), a well-established metric for colonoscopy quality that is related to colon cancer incidence and death. Randomized implementation of the devices across VHA facilities allowed for a pragmatic evaluation of the impact of these devices on ADR. The evaluation demonstrated that the provision of colonoscopy Computer Aided Detection devices resulted in a statistically significant 21% increase in the odds of adenoma detection and an absolute increase in ADR of approximately 4% compared to colonoscopy without CADe. 
| During endoscopy, the CADe device automatically detects and highlights suspected neoplastic lesions/polyps in real time. A highlighted rectangle automatically surrounds suspected lesions. No results or data are tracked or automatically documented. The device physically connects to existing endoscopes, video processors, and display monitors, but does not connect to the VA network. | During endoscopy, the CADe device automatically detects and highlights suspected neoplastic lesions/polyps in real time. A highlighted rectangle automatically surrounds suspected lesions. No results or data are tracked or automatically documented. The device physically connects to existing endoscopes, video processors, and display monitors, but does not connect to the VA network. | No | https://www.medtronic.com/covidien/en-us/products/gastrointestinal-artificial-intelligence/gi-genius-intelligent-endoscopy/indications-safety-warnings.html | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1766 | TrueFidelity CT Deep Learning Image Reconstruction | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Improves the quality of imaging by processing the information acquired during a CT scan to form images used in medical-grade diagnostic imaging. | Improve quality of imaging of CT examinations | CT images | CT images | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1803 | AIDOC BriefCase | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | BriefCase is a radiological computer-assisted triage and notification software device. The software is based on an algorithm programmed component and is intended to run on a linux-based server in a cloud environment. The BriefCase receives filtered DICOM Images, and processes them chronologically by running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low quality, grayscale image that is captioned “not for diagnostic use, for prioritization only” which is displayed as a preview function. This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification. Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone. https://www.accessdata.fda.gov/cdrh_docs/pdf23/K230020.pdf | Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone. 
| When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low quality, grayscale image that is captioned “not for diagnostic use, for prioritization only” which is displayed as a preview function. This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification. | When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low quality, grayscale image that is captioned “not for diagnostic use, for prioritization only” which is displayed as a preview function. This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1848 | iCAD ProFound AI | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Detect breast malignancies. The ProFound Detection V4.0 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. | Accelerate detection of potential malignancies. | The ProFound Detection V4.0 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the ProFound Detection V4.0 marks are displayed by a user, the marks will appear as overlays on the tomosynthesis images. Each detected finding will also be assigned a “score” that corresponds to the ProFound Detection V4.0 algorithm’s confidence that the detected finding is a cancer (Certainty of Finding). Certainty of Finding scores are a percentage in range of 0% to 100% to indicate computer-assisted detection and diagnosis' (CAD) confidence that the finding is malignant. ProFound Detection V4.0 also assigns a score to each case (Case Score) as a percentage in range of 0% to 100% to indicate CAD’s confidence that the case has malignant findings. The higher the Certainty of Finding or Case Score, the higher the confidence that the detected finding is a cancer or that the case has malignant findings. | The ProFound Detection V4.0 software allows an interpreting physician to quickly identify suspicious soft tissue densities and calcifications by marking the detected areas in the tomosynthesis images. When the ProFound Detection V4.0 marks are displayed by a user, the marks will appear as overlays on the tomosynthesis images. 
Each detected finding will also be assigned a “score” that corresponds to the ProFound Detection V4.0 algorithm’s confidence that the detected finding is a cancer (Certainty of Finding). Certainty of Finding scores are a percentage in range of 0% to 100% to indicate computer-assisted detection and diagnosis' (CAD) confidence that the finding is malignant. ProFound Detection V4.0 also assigns a score to each case (Case Score) as a percentage in range of 0% to 100% to indicate CAD’s confidence that the case has malignant findings. The higher the Certainty of Finding or Case Score, the higher the confidence that the detected finding is a cancer or that the case has malignant findings. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1934 | Circle CVI42 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Combining digital image processing, visualization, quantification, and reporting tools, cvi42 is designed to support physicians in the evaluation and analysis of cardiovascular imaging studies. | cvi42 uses machine learning techniques to aid in semi-automatic contouring of regions of interest in cardiovascular magnetic resonance (MR) images and cardiovascular computed tomography (CT) images for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process. | cvi42 is intended to be used by qualified medical professionals for viewing, post-processing and quantitative evaluation of cardiovascular magnetic resonance (MR) images and cardiovascular computed tomography (CT) images in a Digital Imaging and Communications in Medicine Standard format. This is for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process | cvi42 is intended to be used by qualified medical professionals for viewing, post-processing and quantitative evaluation of cardiovascular magnetic resonance (MR) images and cardiovascular computed tomography (CT) images in a Digital Imaging and Communications in Medicine Standard format. This is for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2008 | VA CART Adenoma Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Extracts adenoma status for CART Veterans using Natural Language Processing (NLP) of VistA colonoscopy surgical pathology reports | Leads to more informed personalized discussion with patients | Results contain 0 for no evidence of adenoma or 1 for evidence of adenoma in the pathology report | Results contain 0 for no evidence of adenoma or 1 for evidence of adenoma in the pathology report | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-245 | Siemens YSIO Max | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | New system control software realizes Free Axis Simultaneous Travel (FAST) in up to 6 axes at the same time, supported by 8 individual motors. This feature, along with the AIM (Artificial Intelligence Mapping) feature, is designed to calculate the shortest, fastest, and safest path from one position to the next. These features are present in the predicate Ysio, but improvements in the software performance allow for simultaneous movements in multiple axes in the subject device. This increases workflow efficiency in positioning the equipment for different examinations. | Increases workflow efficiency in positioning the equipment for different examinations. | Calculates paths for changing radiographic equipment position. | Calculates paths for changing radiographic equipment position. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2754 | Zio ECG Utilization Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The Zio ECG Utilization Service (ZEUS) System is intended to analyze and report symptomatic and asymptomatic cardiac events and continuous electrocardiogram information for long-term monitoring. | Support diagnoses. It downloads, stores, analyzes and aggregates the electrocardiogram (ECG) data for a Certified Cardiographic Technician (CCT) to review and generate a report of the findings contained within the data. This enables the provision of a complete ECG processing and analysis service. Automated ECG analysis performance was quantified for any claimed analysis metrics. The resulting statistics demonstrate sensitivity and positive predictivity levels which satisfy requirements and minimize safety or efficacy concerns. | After patient monitoring by Zio XT or Zio AT Patch, a final report is generated based on the beat-to-beat information from the entire ECG recording. It is indicated for use on patients 18 years or older who may be asymptomatic or who may suffer from transient symptoms such as palpitations, shortness of breath, dizziness, light-headedness, pre-syncope, syncope, fatigue, or anxiety. The reports are provided for review by the intended user to render a diagnosis based on clinical judgment and experience. It is not intended for use on critical care patients. | After patient monitoring by Zio XT or Zio AT Patch, a final report is generated based on the beat-to-beat information from the entire ECG recording. 
It is indicated for use on patients 18 years or older who may be asymptomatic or who may suffer from transient symptoms such as palpitations, shortness of breath, dizziness, light-headedness, pre-syncope, syncope, fatigue, or anxiety. The reports are provided for review by the intended user to render a diagnosis based on clinical judgment and experience. It is not intended for use on critical care patients. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3131 | SOMATOM go.Up; SOMATOM go.Now; SOMATOM go.All; SOMATOM go.Top; SOMATOM go.Sim; SOMATOM go.Open Pro; SOMATOM | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | This computed tomography system is intended to generate and process cross-sectional images of patients by computer reconstruction of x-ray transmission data. | The images delivered by the system can be used by a trained staff as an aid in diagnosis, treatment and radiation therapy planning as well as for diagnostic and therapeutic interventions. | The images delivered by the system can be used by a trained staff as an aid in diagnosis, treatment and radiation therapy planning as well as for diagnostic and therapeutic interventions. | The images delivered by the system can be used by a trained staff as an aid in diagnosis, treatment and radiation therapy planning as well as for diagnostic and therapeutic interventions. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3205 | 3M/Solventum 360 Encompass Computer Assisted Coding - Auto Suggestion | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Improving medical coding workflow processes. | Increasing the efficiency of medical coders when reviewing/validating the appropriate ICD-10 CM, CPT, and HCPCS codes for encounters based on associated clinical documentation. | Suggested ICD-10 CM, CPT, and HCPCS codes for a medical coder to select based on associated clinical documentation. | Suggested ICD-10 CM, CPT, and HCPCS codes for a medical coder to select based on associated clinical documentation. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3213 | Vivid iq | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Vivid iq system is a general-purpose, Track 3, diagnostic ultrasound device, primarily intended for cardiovascular diagnostic use and shared service imaging. It is intended for use by qualified and trained Healthcare professionals for Ultrasound imaging, measurement, display and analysis of the human body and fluid. | Improve ultrasound workflows. | Measurements of areas of interest. | Measurements of areas of interest. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3295 | EchoPAC Software Only, EchoPAC Plug-In | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | EchoPAC Software Only / EchoPAC Plug-in provides image processing, annotation, analysis, measurement, report generation, communication, storage and retrieval functionality of ultrasound images that are acquired via the GE Healthcare Vivid family of ultrasound systems, as well as DICOM images from other ultrasound systems. It is intended for diagnostic review and analysis of ultrasound images, patient record management and reporting, for use by, or on the order of a licensed physician. | Improve ultrasound workflows. | Image processing, annotation, analysis, measurement, report generation, communication, storage and retrieval functionality of ultrasound images. | Image processing, annotation, analysis, measurement, report generation, communication, storage and retrieval functionality of ultrasound images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3414 | Densitas Density AI | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Densitas densityai™ is a software application intended for use with compatible full field digital mammography and digital breast tomosynthesis systems. Densitas densityai™ provides an ACR BI-RADS Atlas 5th Edition breast density category to aid interpreting physicians in the assessment of breast tissue composition. Densitas densityai™ produces adjunctive information. It is not a diagnostic aid. | Automated breast density calculation. | The software processes the data according to proprietary algorithms and generates a Breast Density Grade in accordance with the American College of Radiology’s Breast Imaging Reporting and Data System (BI-RADS) 5th edition density classification scale. | The software processes the data according to proprietary algorithms and generates a Breast Density Grade in accordance with the American College of Radiology’s Breast Imaging Reporting and Data System (BI-RADS) 5th edition density classification scale. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3455 | Riverain ClearRead CT | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | ClearRead computed tomography (CT) is a dedicated post-processing application that generates a secondary vessel suppressed Lung CT series with computer-aided detection (CADe) marks and associated region descriptors intended to aid the radiologist in the detection of pulmonary nodules. | ClearRead CT™ is comprised of computer assisted reading tools designed to aid the radiologist in the detection of pulmonary nodules during review of CT examinations of the chest on an asymptomatic population. The ClearRead CT requires that both lungs are in the field of view. ClearRead CT provides adjunctive information and is not intended to be used without the original CT series. | A secondary vessel-suppressed Lung CT series with computer-aided detection (CADe) marks and associated region descriptors to aid the radiologist in the detection of pulmonary nodules. | A secondary vessel-suppressed Lung CT series with computer-aided detection (CADe) marks and associated region descriptors to aid the radiologist in the detection of pulmonary nodules. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3537 | Transpara Breast Care | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Transpara software is intended for use as a concurrent reading aid for physicians interpreting full-field digital mammography exams (FFDM) and digital breast tomosynthesis (DBT) exams from compatible FFDM and DBT systems, to identify regions suspicious of breast cancer and assess their likelihood of malignancy. | Supporting mammography and digital breast tomosynthesis exams: it is designed to be used by physicians to improve interpretation of full-field digital mammography (FFMD) and digital breast tomosynthesis (DBT), improve detection and characterization of abnormalities, and enhance workflow. | The output of the device includes locations of calcifications groups and soft-tissue regions, with scores indicating the likelihood that cancer is present, and an exam score indicating the likelihood that cancer is present in the exam. Patient management decisions should not be made solely on the basis of analysis by Transpara. | The output of the device includes locations of calcifications groups and soft-tissue regions, with scores indicating the likelihood that cancer is present, and an exam score indicating the likelihood that cancer is present in the exam. Patient management decisions should not be made solely on the basis of analysis by Transpara. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3541 | XIDF-AWS801, Angio Workstation (Alphenix Workstation), V9.5 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The XIDF-AWS801, Angio Workstation (Alphenix Workstation), V9.5 is used for image input from the Diagnostic Imaging System and Workstation, image processing, and display. The Angio Workstation (XIDF-AWS801) is used in combination with an interventional angiography system (Alphenix series systems, Infinix-i series systems and INFX series systems) to provide 2D and 3D imaging of selective catheter angiography procedures for the whole body (including heart, chest, abdomen, brain, and extremity). | Supports angiography imaging workflows. | Automatically stabilizes device during imaging. | Automatically stabilizes device during imaging. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3705 | Workflow Box | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Workflow Box is a data routing and image processing tool which automatically applies contours to data that is sent to one or more of the included image processing workflows. Contours generated by Workflow Box may be used as an input to clinical workflows, including, but not limited to, radiation therapy treatment planning. Workflow Box must be used in conjunction with appropriate software to review and edit results generated automatically by Workflow Box components. For example, image visualization software must be used to facilitate the review and edit of contours generated by Workflow Box component applications. Workflow Box is intended to be used by trained medical professionals. Workflow Box is not intended to automatically detect lesions. | Accelerates radiology workflows: Workflow Box is a software application that enables the routing of image data and structures to automatic image processing workflows, including atlas based contouring, image registration based re-contouring and machine learning based contouring. Workflow Box data routing and contouring workflows support CT, MR and RTSTRUCT image data and structures. Workflow Box supports the routing of data to and from DICOM nodes within a hospital network. | Workflow Box is a data routing and image processing tool which automatically applies contours to data which is sent to one or more of the included image processing workflows. | Workflow Box is a data routing and image processing tool which automatically applies contours to data which is sent to one or more of the included image processing workflows. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3906 | eCaremanager (Philips) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | eCaremanager is a critical integrative software program purchased from Philips by the VA National TeleCritical Care. The program's inputs include ICU patient care data (physiologic, biochemical, and microbiologic information) from VistA as well as inputs from the CDW; it analyzes the data to create trend and threshold alarms and acuity scores that are used by TeleCritical Care providers to co-manage ICU patients with bedside providers. This data integration, portrayal, and analysis expands the TeleCritical Care providers' clinical capabilities and permits delivery of critical care services at a population rather than an individual level. This solves the problem of providing population critical care by actively identifying patient biochemical and physiologic derangements so that TeleCritical Care providers can assess and manage their care. | Enhancing patient outcomes | Alerts and alarms for biochemical and physiologic derangements | Alerts and alarms for biochemical and physiologic derangements | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3951 | Voluson Expert 22, Voluson Expert 20, Voluson Expert 18 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The systems are full-featured Track 3 ultrasound systems, primarily for general radiology use and specialized for OB/GYN with particular features for real-time 3D/4D acquisition. They consist of a mobile console with keyboard control panel; color LCD/TFT touch panel, color video display and optional image storage and printing devices. They provide high performance ultrasound imaging and analysis and have comprehensive networking and DICOM capability. They utilize a variety of linear, curved linear, matrix phased array transducers including mechanical and electronic scanning transducers, which provide highly accurate real-time three-dimensional imaging supporting all standard acquisition modes. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-enabled-medical-devices The systems are all intended for diagnostic ultrasound imaging and fluid flow analysis. | Accelerate ultrasound workflows. | Recommended image contours. | Recommended image contours. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4107 | CareLink Home Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Address high frequency of false positive alerts in patients with implanted Medtronic electrocardiogram (EKG) loop recorders | Minimize work related to the review of false positive alert transmissions. | Filters out false positive loop recorder transmissions: the CareLink home monitoring system collects data, transmitted by patients with implanted cardiac monitors from their homes, for review by clinicians on a password protected Web page. Transmissions are made by the monitors when certain criteria are met, suggesting an important heart rhythm abnormality is present. A proprietary industry-created AI system evaluates the transmitted data and classifies the findings that triggered the transmissions as true arrhythmias or false positives. The home monitoring system for Carelink Reveal Linq and Linq II uses AI deep learning algorithms, flowing into the CareLink network, to remove false Atrial fibrillation (AFib) and false pause episodes. | Filters out false positive loop recorder transmissions: the CareLink home monitoring system collects data, transmitted by patients with implanted cardiac monitors from their homes, for review by clinicians on a password protected Web page. Transmissions are made by the monitors when certain criteria are met, suggesting an important heart rhythm abnormality is present. A proprietary industry-created AI system evaluates the transmitted data and classifies the findings that triggered the transmissions as true arrhythmias or false positives. The home monitoring system for Carelink Reveal Linq and Linq II uses AI deep learning algorithms, flowing into the CareLink network, to remove false Atrial fibrillation (AFib) and false pause episodes. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-413 | GE Portable Critical Care Suite 2.x | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Critical Care Suite is a software module that employs AI‐based image analysis algorithms to identify pre‐specified critical findings (pneumothorax) in frontal chest X‐ray images and flag the images in the PACS/workstation to enable prioritized review by the radiologist. | Triage pneumothorax cases. It identifies pre‐specified critical findings (pneumothorax) in frontal chest X‐ray images and flags the images in the PACS/workstation to enable prioritized review by the radiologist. | Notification that pneumothorax may be present. | Notification that pneumothorax may be present. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4152 | Stratification Tool for Opioid Risk Mitigation (STORM) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Patients exposed to opioid drugs often have complex medical needs and are at risk of multiple negative outcomes, including overdose, suicide, development of addiction, and other behavioral health challenges. Numerous strategies have been developed to minimize risks and treat the conditions that underlie them, but coordination of care across patient treatment providers and conditions can be challenging, and interventions to reduce risk can be time consuming. Decision support systems are needed to ensure recognition of a patient’s current conditions and consideration and tracking of recommended interventions. Risk estimation is needed to ensure that the most complex and at-risk patients receive adequate clinical attention in time-constrained health care environments. | The effectiveness of the targeted prevention program was tested during a randomized staged roll-out using a stepped wedge evaluation design. The targeted prevention program utilizes the STORM model to identify patients for interdisciplinary case review (i.e., by a team of clinicians that includes those with pain, behavioral health, and recovery expertise). For patients the predictive model estimated to be at "very high" risk of overdose or suicide events in the next year, inclusion in the mandated team-based case review was associated with a significant 22% reduction in all-cause mortality within 4 months of inclusion (Strombotne et al., 2023). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9060407/ Inclusion in the case review program was also found to reduce all-cause mortality and the likelihood of opioid analgesic discontinuation in the subpopulation of patients on long-term opioid analgesics (Li et al., 2023). | The STORM predictive model provides an estimate of the likelihood of an overdose, suicide event, or death in the next year, documented in a health care system. | The STORM predictive model provides an estimate of the likelihood of an overdose, suicide event, or death in the next year, documented in a health care system. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4156 | Aquilion ONE (TSX-306A/3) V10.12 with Spectral Imaging System, Vitrea Software Package, VSTP-001A | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | This device is indicated to acquire and display cross-sectional volumes of the whole body, including the head, with the capability to provide images of whole organs in a single rotation. Whole organs include, but are not limited to, brain, heart, pancreas, etc. | Reduces image noise. | Denoised images. | Denoised images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4197 | Vereos PET/CT | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The positron emission tomography (PET)/computed tomography (CT) system is used for the purpose of detecting, localizing, diagnosing, staging, re-staging, and follow-up for monitoring therapy response of various diseases in oncology, cardiology and neurology. | Supports diagnosis and treatment planning. | The system provides tools for quantifying results from the CT and PET images and provides the means for a simplified review of the CT, PET, and fused images. The integration of the anatomical data from CT with the metabolic data from PET gives clinicians the visual information necessary to define the severity and the extent of the disease. | The system provides tools for quantifying results from the CT and PET images and provides the means for a simplified review of the CT, PET, and fused images. The integration of the anatomical data from CT with the metabolic data from PET gives clinicians the visual information necessary to define the severity and the extent of the disease. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4234 | REACH VET Suicide Risk Prediction and Recovery Engagement | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Suicide is a low frequency event, but clinical attention and intervention can reduce the risk of suicide and other adverse behavioral health outcomes. REACH VET offers a targeted suicide prevention approach to focus attention on Veterans with the greatest risk/need. This AI system is designed to support identification of Veterans at an elevated risk of suicide or other preventable adverse outcomes to support mitigation of risks, if indicated. | The REACH VET 2.0 model is expected to identify a population of Veterans at elevated risk of suicide and other adverse outcomes for review by providers. Providers conduct outreach to identified Veterans when appropriate. This model and the targeted prevention program in which it is used (i.e., the REACH VET clinical program) augment VHA's extensive clinical suicide prevention program. While VHA conducts universal screenings for suicide risk by asking patients structured questions regarding suicidality, this screening process does not identify all Veterans at elevated risk. The REACH VET model is intended to supplement clinical screening and assessments and support identification of changes in risk. The REACH VET model supplements clinical practices by identifying Veterans at statistical risk of suicide, given similarities in their health care data with prior suicide decedents. After identification, reevaluation of care and outreach are completed to support risk mitigation. This targeted prevention program was nationally implemented in VHA in 2017, and an evaluation of its effects found that REACH VET implementation was associated with greater treatment engagement, new safety plan documentation, and fewer mental health admissions, emergency department visits, and documented suicide attempts. Full findings of this program evaluation are available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8524305/. | The REACH VET 2.0 predictive model provides an estimate of the likelihood of dying of suicide in the next month for all active VHA Veterans. | The REACH VET 2.0 predictive model provides an estimate of the likelihood of dying of suicide in the next month for all active VHA Veterans. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4238 | Venue Fit | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Venue Fit is a general-purpose diagnostic ultrasound system intended for use by qualified and trained healthcare professionals to evaluate the body, by ultrasound imaging and fluid flow analysis. The Venue Fit is a compact, portable system with a small footprint. The system can be hand-carried using the integrated handle, placed on a horizontal surface (if kickstand is attached), attached to a mobile cart, or mounted on the wall. It has a high resolution color LCD monitor, with a simple, multi-touch user interface that makes the system intuitive. The system can be powered through an electrical wall outlet for long-term use or through an internal battery for a short time with full functionality and scanning. The Venue Fit utilizes a variety of linear, convex, and phased array transducers that provide high imaging capability, supporting all standard acquisition modes. Compatible biopsy kits can be used for needle-guidance procedures. The system is capable of displaying the patient's Electrocardiogram (ECG) trace, synchronized to the scanned image. This allows the user to view an image from a specific time of the ECG signal, used as an input for gating during scanning. The ECG signal can be input directly from the patient or as an output from an ECG monitoring device. ECG information is not intended for monitoring or diagnosis. | Supports ultrasound procedures. | The cNerve algorithm may help the user detect and track nerves during the scouting stage of a nerve block procedure, prior to inserting the needle to inject the anesthetic material. | The cNerve algorithm may help the user detect and track nerves during the scouting stage of a nerve block procedure, prior to inserting the needle to inject the anesthetic material. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4279 | Velacur | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Velacur is a portable device intended to measure the stiffness and attenuation of the liver in a non-invasive way, via measurement of liver tissue shear modulus and ultrasound attenuation. This is done by measuring the wavelength or wave speed of mechanically created shear waves within the organ of the patient. Attenuation is measured directly via the loss in power of the ultrasound beam. The device is designed to be used at the point of care, in clinics and hospitals. The device is used by a medical professional, an employee of the clinic/hospital. The activation unit is placed under the patient, while lying supine on an exam bed. The activation unit vibrates at frequencies of 40, 50, and 60 Hz, causing shear waves within the liver of the patient. The ultrasound transducer is placed on the patient’s skin, over the intercostal space, and is used to take volumetric scans of the liver while shear waves are occurring. The device includes two algorithms designed to help users detect good quality shear waves and identify liver tissue. From the scan data, the device calculates tissue stiffness and attenuation. Minor hardware and software changes were made to the device. The organ guide (cleared in K223287) was also extended to add more optional overlays on top of the liver overlay to help with optimizing the scan and training users to obtain adequate images. The significant change is the addition of a new output measure for Velacur, an ultrasound derived fat fraction (VDFF). | Support liver stiffness measurements. | The Velacur Determined Fat Fraction combines ultrasound attenuation and backscatter coefficient measurements. The device is indicated to determine liver tissue stiffness, attenuation, and Velacur Determined Fat Fraction in a non-invasive way. VDFF is not intended to be used in pediatric patients. This device's outputs are meant to be used in conjunction with other clinical indicators in order to aid in clinical management of patients with liver diseases, including hepatic steatosis. The device is intended to be used in a clinical setting and by trained medical professionals. | The Velacur Determined Fat Fraction combines ultrasound attenuation and backscatter coefficient measurements. The device is indicated to determine liver tissue stiffness, attenuation, and Velacur Determined Fat Fraction in a non-invasive way. VDFF is not intended to be used in pediatric patients. This device's outputs are meant to be used in conjunction with other clinical indicators in order to aid in clinical management of patients with liver diseases, including hepatic steatosis. The device is intended to be used in a clinical setting and by trained medical professionals. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4320 | Syngo Application Software | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Syngo Application Software is a medical software for real-time viewing, image manipulation, 3D-visualization, communication, and storage of medical images and data on exchange media. It is used for diagnostic image viewing and post-processing and for viewing and post-processing during interventional procedures. | Accelerates radiology workflows. | Performs image fusion. | Performs image fusion. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4353 | Rapid AI | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Rapid is a software package that provides the visualization and study of changes in tissue using digital images captured by diagnostic imaging systems including CT (Computed Tomography) and MRI (Magnetic Resonance Imaging), as an aid to physician diagnosis. Rapid AI is designed to streamline medical image processing tasks that are time consuming and tiring in routine patient workups. | Streamline medical image processing tasks that are time consuming and tiring in routine patient workups. | Rapid AI provides tools for performing the following types of analysis: selection of acute stroke patients for endovascular thrombectomy, volumetry of thresholded maps, time intensity plots for dynamic time courses, measurement of mismatch between labeled volumes on co-registered image volumes, and large vessel density. | Rapid AI provides tools for performing the following types of analysis: selection of acute stroke patients for endovascular thrombectomy, volumetry of thresholded maps, time intensity plots for dynamic time courses, measurement of mismatch between labeled volumes on co-registered image volumes, and large vessel density. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4361 | Spine Planning (2.0), Elements Spine Planning, Elements Planning Spine | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Spine Planning is intended for pre- and intraoperative planning of open and minimally invasive spinal procedures. It displays digital patient images (computed tomography (CT), Cone Beam CT, Magnetic resonance (MR), X-ray) and allows measurement and planning of spinal implants, such as screws and rods. | Supports spine procedure planning. | AI/ML algorithms are used in Spine Planning for detection of landmarks on 2D images for vertebrae labeling and measurement and vertebra detection on Digitally Reconstructed Radiograph (DRR) images of 3D datasets for atlas registration (labeling of the vertebra). The AI/ML algorithm is a Convolutional Neural Network (CNN) developed using a Supervised Learning approach. The algorithm was developed using a controlled internal process that defines activities from the inspection of input data to the training and verification of the algorithm. | AI/ML algorithms are used in Spine Planning for detection of landmarks on 2D images for vertebrae labeling and measurement and vertebra detection on Digitally Reconstructed Radiograph (DRR) images of 3D datasets for atlas registration (labeling of the vertebra). The AI/ML algorithm is a Convolutional Neural Network (CNN) developed using a Supervised Learning approach. The algorithm was developed using a controlled internal process that defines activities from the inspection of input data to the training and verification of the algorithm. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4443 | RayStation 11B | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | RayStation is a software system for radiation therapy and medical oncology. It is a treatment planning system for planning, analysis and administration of radiation therapy and medical oncology treatment plans. | Supports radiation treatment planning. | Based on user input, RayStation proposes treatment plans. After a proposed treatment plan is reviewed and approved by authorized intended users, RayStation may also be used to administer treatments. | Based on user input, RayStation proposes treatment plans. After a proposed treatment plan is reviewed and approved by authorized intended users, RayStation may also be used to administer treatments. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4566 | QLAB Advanced Quantification Software | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Philips QLAB Advanced Quantification Software System (QLAB) is designed to view and quantify image data acquired on Philips ultrasound systems. QLAB is available either as a stand-alone product that can function on a standard PC, a dedicated workstation, and on-board Philips' ultrasound systems. https://www.accessdata.fda.gov/cdrh_docs/pdf20/K200974.pdf | Supports ultrasound workflows. | Recommends measurements of areas of interest. | Recommends measurements of areas of interest. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4599 | Preventice BodyGuardian Remote Monitoring System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The BodyGuardian System detects and monitors cardiac arrhythmias in ambulatory patients, when prescribed by a physician or other qualified healthcare professional. | Early detection of arrhythmias. | Transmits measurements to physician for review. | Transmits measurements to physician for review. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4607 | EPIQ Series Diagnostic Ultrasound System; Affiniti Series Diagnostic Ultrasound System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The R-Trigger algorithm software feature on Philips EPIQ and Affiniti Ultrasound System is intended to support detection of R-wave peak (R-trigger) as an input to certain transthoracic echo (TTE) clinical applications, initially including AutoStrain LV, AutoEF, 2D Auto LV (collectively referred to as “AutoStrain”), and AutoMeasure applications. The R-trigger algorithm is expected to be implemented as workflow enhancement for transthoracic clinical applications on EPIQ and Affiniti Ultrasound Systems in the VM13 software release. The AutoMeasure and AutoStrain features, cleared on Philips EPIQ and Affiniti systems (K211597, K190913 resp.), support users during B-mode (2D), CW-, PW- and TDI-Doppler measurements by automating some of the measurements needed to complete a routine transthoracic echo (TTE) exam for adult patients. The current AutoMeasure and AutoStrain applications require input from a physio signal (ECG), using electrodes, because the modules and their detectors work on single cardiac cycles, starting and ending with the End Diastole (ED), or more precisely, with delimiting R-trigger events as a good approximation. Determining the R-triggers is currently done on the EPIQ and Affiniti Ultrasound Systems by a dedicated physio board. To enable clinical users to use the AutoMeasure and AutoStrain applications without the R-trigger (ECG-based) input, a non-ECG-based R-trigger feature has been developed. | Accelerate ultrasound workflows: the R-trigger algorithm is expected to be implemented as workflow enhancement for transthoracic clinical applications on EPIQ and Affiniti Ultrasound Systems in the VM13 software release. The AutoMeasure and AutoStrain features, cleared on Philips EPIQ and Affiniti systems (K211597, K190913 resp.), support users during B-mode (2D), CW-, PW- and TDI-Doppler measurements by automating some of the measurements needed to complete a routine transthoracic echo (TTE) exam for adult patients. | Automates some routine ultrasound measurements. The intended use of EPIQ Ultrasound Diagnostic System is diagnostic ultrasound imaging and fluid flow analysis of the human body, with the following indications for use: Abdominal, Cardiac Adult, Cardiac other (Fetal), Cardiac Pediatric, Cerebral Vascular, Cephalic (Adult), Cephalic (Neonatal), Fetal/Obstetric, Gynecological, Intraoperative (Vascular), Intraoperative (Cardiac), intra-luminal, intra-cardiac echo, Musculoskeletal (Conventional), Musculoskeletal (Superficial), Ophthalmic, Other: Urology, Pediatric, Peripheral Vessel, Small Organ (Breast, Thyroid, Testicle), Transesophageal (Cardiac), Transrectal, Transvaginal, Lung. Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, and Harmonic Imaging. | Automates some routine ultrasound measurements. The intended use of EPIQ Ultrasound Diagnostic System is diagnostic ultrasound imaging and fluid flow analysis of the human body, with the following indications for use: Abdominal, Cardiac Adult, Cardiac other (Fetal), Cardiac Pediatric, Cerebral Vascular, Cephalic (Adult), Cephalic (Neonatal), Fetal/Obstetric, Gynecological, Intraoperative (Vascular), Intraoperative (Cardiac), intra-luminal, intra-cardiac echo, Musculoskeletal (Conventional), Musculoskeletal (Superficial), Ophthalmic, Other: Urology, Pediatric, Peripheral Vessel, Small Organ (Breast, Thyroid, Testicle), Transesophageal (Cardiac), Transrectal, Transvaginal, Lung. Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, and Harmonic Imaging. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4648 | Persyst 14 EEG Review and Analysis Software | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Automatic electroencephalogram (EEG) measurement and seizure detection: Persyst 14 EEG Review and Analysis Software is intended for the review, monitoring and analysis of EEG recordings made by EEG devices using scalp electrodes and to aid neurologists in the assessment of EEG. | Provides EEG measurements and detects seizures. | Persyst 14 provides notifications for seizure detection, quantitative EEG and aEEG that can be used when processing a record during acquisition. These include an on screen display and the optional delivery of an email message. | Persyst 14 provides notifications for seizure detection, quantitative EEG and aEEG that can be used when processing a record during acquisition. These include an on screen display and the optional delivery of an email message. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4689 | OPTIS Mobile Next Imaging System, OPTIS Integrated Next Imaging System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | With the Ultreon 1.0 software application, these systems perform Optical Coherence Tomography (OCT) imaging of coronary arteries using compatible Dragonfly imaging catheters. Resting Full-cycle Ratio (RFR), Fractional Flow Reserve (FFR), and Pd/Pa at rest physiological waveforms are also measured by the system to assess the severity of a coronary lesion by measuring the pressure drop across the lesion (distal vs proximal pressure). The physician may use the resting full-cycle ratio (RFR) or fractional flow reserve (FFR) parameter, along with the knowledge of patient history, medical expertise, and clinical judgment to determine if therapeutic intervention is indicated. | Support coronary artery scans. | Displays measurements of interest. | Displays measurements of interest. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4730 | O-arm O2 Imaging System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The O-arm O2 Imaging System is a mobile X-ray system designed for 2D and 3D imaging of adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16 cm. It is intended to provide physicians with 2D and 3D information of anatomic structures and objects with high X-ray attenuation, such as bony anatomy and metallic objects. | Supports X-ray workflows. | The Spine Smart Dose feature leverages Machine Learning technology with existing O-arm images to achieve a reduction in dose on the O-arm O2 Imaging System. It is an algorithm designed to reduce the noise of 3D reconstructions acquired from fewer acquisitions so that clinically viable 3D images can be produced using fewer projections. | The Spine Smart Dose feature leverages Machine Learning technology with existing O-arm images to achieve a reduction in dose on the O-arm O2 Imaging System. It is an algorithm designed to reduce the noise of 3D reconstructions acquired from fewer acquisitions so that clinically viable 3D images can be produced using fewer projections. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4771 | NeuroQuant | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | NeuroQuant is a medical device with a fully automated Magnetic resonance (MR) imaging post-processing software that provides automatic labeling, visualization, and volumetric quantification of brain structures and lesions from a set of MR images and returns segmented images and morphometric reports. | Accelerate magnetic resonance imaging (MRI) workflows. | Automatic labeling, visualization, and volumetric quantification of brain structures and lesions. | Automatic labeling, visualization, and volumetric quantification of brain structures and lesions. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4812 | MAGNETOM Vida; MAGNETOM Lumina; MAGNETOM Aera; MAGNETOM Skyra; MAGNETOM Prisma; MAGNETOM Prisma fit | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images, spectroscopic images and/or spectra, and displays the internal structure and/or function of the head, body, or extremities. | When interpreted by a trained physician, these images and/or spectra and the physical parameters derived from the images and/or spectra yield information that may assist in diagnosis. | Denoised images. | Denoised images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4923 | 3D Quorum | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Identifies clinically relevant regions of interest through overlapping slices (3mm) to ensure no loss of 3D data. Reduces the number of 3D images to be reviewed by the Radiologist by two-thirds. | Enables early breast cancer detection and enhances patient outcomes. | Generates 6mm 'SmartSlices' from the original high-resolution 3D data. Identifies clinically relevant regions of interest through overlapping slices (3mm) to ensure no loss of 3D data. | Generates 6mm 'SmartSlices' from the original high-resolution 3D data. Identifies clinically relevant regions of interest through overlapping slices (3mm) to ensure no loss of 3D data. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4935 | TeraRecon iNtuition-Structural Heart Module | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The iNtuition-Structural Heart Module is an iNtuition (K121916) based optional feature that employs all standard features offered by iNtuition. These standard features include convenient image manipulation tools (like drawing of region of interests), manual and automatic segmentation of structures, image assessment and measurement tools (linear, diameter, perimeter, angle, area and volume) and tools that support report creation, transmission and storage of reports in digital form, and tracking of historical information about the studies analyzed by the software. iNtuition Vessel analysis and calcium scoring features are utilized to support automatic and manual centerline extraction and analysis and calcium quantification. | Accelerate standard cardiology workflows. | Image segmentations and measurements. | Image segmentations and measurements. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5005 | QVCAD System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | QVCAD is designed to process and display a projection image, using the ABUS image stack on a customer supplied computer, and monitor it for review, with certain areas highlighted and/or marked for attention. | Identify potentially suspicious findings in breast scans. | The QVCAD CAD engine employs several image pattern recognition processes and artificial neural networks to detect suspicious areas of breast tissue 5mm or more in diameter that have characteristics similar to breast lesions. It inspects every tissue location in the 3-D ABUS images it processes with dedicated algorithms. These algorithms characterize the volumetric region surrounding each location by automatically extracting features that have been determined to represent suspicious signs. These features include typical ultrasound image features described in the radiological literature, such as region boundary, margin, echo pattern, orientation, posterior enhancement, architectural distortions, etc. Using those features, the QVCAD CAD engine generates a score for each suspicious area to distinguish potential breast lesions from normal breast tissue. | The QVCAD CAD engine employs several image pattern recognition processes and artificial neural networks to detect suspicious areas of breast tissue 5mm or more in diameter that have characteristics similar to breast lesions. It inspects every tissue location in the 3-D ABUS images it processes with dedicated algorithms. These algorithms characterize the volumetric region surrounding each location by automatically extracting features that have been determined to represent suspicious signs. These features include typical ultrasound image features described in the radiological literature, such as region boundary, margin, echo pattern, orientation, posterior enhancement, architectural distortions, etc. Using those features, the QVCAD CAD engine generates a score for each suspicious area to distinguish potential breast lesions from normal breast tissue. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5017 | FFRangio | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | FFRangio analysis is intended to support the functional evaluation of coronary artery disease. The results of this analysis are provided as a supportive aid for qualified clinicians in the evaluation and assessment of the physiology of coronary arteries. | Support coronary artery analysis. | FFRangio uses standard angiographic images that are retrieved from the X-ray Imaging System (C-arm) in Digital Imaging and Communications in Medicine (DICOM) format. The user selects the images and, following the system prompts, marks key features on the images, including the target lesion, ostium location, main vessel, target vessel, and its side branches. The system then matches the corresponding vessels among the projections and generates a 3D computer model of the vessels. The 3D model is used for blood flow analysis and determination of the FFRangio. | FFRangio uses standard angiographic images that are retrieved from the X-ray Imaging System (C-arm) in Digital Imaging and Communications in Medicine (DICOM) format. The user selects the images and, following the system prompts, marks key features on the images, including the target lesion, ostium location, main vessel, target vessel, and its side branches. The system then matches the corresponding vessels among the projections and generates a 3D computer model of the vessels. The 3D model is used for blood flow analysis and determination of the FFRangio. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5046 | Quantra | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Quantra software is designed to estimate breast composition categories by analyzing the distribution and texture of parenchymal tissue patterns, which can be responsible for the masking effect during mammographic reading. | Support mammographic image analysis. | The Quantra software reports a result for each subject, which is intended to aid radiologists in the assessment of breast tissue composition. The Quantra software produces adjunctive information; it is not a diagnostic aid. | The Quantra software reports a result for each subject, which is intended to aid radiologists in the assessment of breast tissue composition. The Quantra software produces adjunctive information; it is not a diagnostic aid. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5058 | FastStroke, CT Perfusion 4D | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | CT Perfusion 4D is an image analysis software package that allows the user to produce dynamic image data and generate information with regard to changes in image intensity over time. It supports the analysis of CT Perfusion images (in the head and body) after the intravenous injection of contrast and calculation of the various perfusion-related parameters (i.e., regional blood flow, regional blood volume, mean transit time and capillary permeability). | This software will aid in the assessment of the extent and type of perfusion, blood volume, and capillary permeability changes, which may be related to stroke or tumor angiogenesis and the treatment thereof. | It allows the user to produce dynamic image data and generate information with regard to changes in image intensity over time. The results are displayed in a user-friendly graphic format as parametric images. | It allows the user to produce dynamic image data and generate information with regard to changes in image intensity over time. The results are displayed in a user-friendly graphic format as parametric images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5140 | Discovery MI Gen2 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Discovery MI Gen2 PET/CT system is intended for CT attenuation corrected, anatomically localized PET imaging of the distribution of positron-emitting radiopharmaceuticals. It is intended to image the whole body, head, heart, brain, lung, breast, bone, the gastrointestinal and lymphatic systems, and other organs. The system is also intended for stand-alone, diagnostic CT imaging. | The introduction of the 30 cm configuration comes with several clinical benefits. Compared to other Discovery MI configurations, the higher AFOV coverage allows a patient to be scanned using fewer fields of view and can result in shorter scan times. Additionally, sensitivity of the 30 cm system is higher compared to other configurations, like the Discovery MI 25 cm, which assists in dose reduction and better detectability of small lesions. | Produces attenuation corrected PET images. | Produces attenuation corrected PET images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5169 | Genius AI | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device. It is intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest, including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems, and provide confidence scores that offer assessment for Certainty of Findings and a Case Score. The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam. | Support mammographic image analysis. | For each detected lesion, Genius AI Detection produces CAD results that include the location of the lesion, an outline of the lesion, and a confidence score for that lesion. Genius AI Detection also produces a case score for the entire tomosynthesis exam. Genius AI Detection packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently to the reading of the tomosynthesis exam. In addition, a combination of peripheral information such as number of marks and case scores may be used on the review workstation to enhance the interpreting physician’s workflow by offering a better organization of the patient worklist. | For each detected lesion, Genius AI Detection produces CAD results that include the location of the lesion, an outline of the lesion, and a confidence score for that lesion. Genius AI Detection also produces a case score for the entire tomosynthesis exam. Genius AI Detection packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently to the reading of the tomosynthesis exam. In addition, a combination of peripheral information such as number of marks and case scores may be used on the review workstation to enhance the interpreting physician’s workflow by offering a better organization of the patient worklist. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5222 | CT CoPilot | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | CT CoPilot is intended for use in automating post-acquisition quantitative analysis of computed tomography (CT) images of the brain for patients aged 18 or older. CT CoPilot performs automatic reformatting, labeling, and quantification of segmentable structures from a set of CT images. | Accelerate image analysis workflows. This software is intended to automate the current manual process of identifying, labeling, and quantifying structures identified on CT images of the brain and to provide automated registration and reformatting of data. | The output of the software provides these values as numerical volumes and images which have been annotated with graphical color overlays, with each color representing a specific segmental structure. | The output of the software provides these values as numerical volumes and images which have been annotated with graphical color overlays, with each color representing a specific segmental structure. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5263 | Cranial Navigation, Navigation Software Cranial, Navigation Software Craniofacial, Cranial EM System, Automatic Registration iMRI | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Cranial EM is intended as an image-guided planning and navigation system to enable neurosurgery procedures. The device is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (Computed Tomography (CT), Computed Tomography Angiography (CTA), X-Ray, MR, Magnetic Resonance Angiography (MRA) and ultrasound) of the anatomy. | Support neurosurgery planning. | Cranial Navigation is an image guided surgery system for navigated treatments in the field of cranial surgery, including the newly added Craniofacial indication. It offers different patient image registration methods and instrument calibration to allow surgical navigation by using optical tracking technology. The device provides different workflows guiding the user through preoperative and intraoperative steps. Automatic Registration iMRI is an accessory to Cranial Navigation, enabling automatic image registration for intraoperatively acquired MR imaging. The registration object can be used in subsequent applications (e.g. Cranial Navigation 4.1). It consists of the software Automatic Registration iMRI 1.0, a registration matrix, and a reference adapter. Similarly, the Cranial EM System is an image-guided planning and navigation system to enable neurosurgical procedures. It offers instrument handling as well as patient registration to allow surgical navigation by using electromagnetic tracking technology. | Cranial Navigation is an image guided surgery system for navigated treatments in the field of cranial surgery, including the newly added Craniofacial indication. It offers different patient image registration methods and instrument calibration to allow surgical navigation by using optical tracking technology. The device provides different workflows guiding the user through preoperative and intraoperative steps. Automatic Registration iMRI is an accessory to Cranial Navigation, enabling automatic image registration for intraoperatively acquired MR imaging. The registration object can be used in subsequent applications (e.g. Cranial Navigation 4.1). It consists of the software Automatic Registration iMRI 1.0, a registration matrix, and a reference adapter. Similarly, the Cranial EM System is an image-guided planning and navigation system to enable neurosurgical procedures. It offers instrument handling as well as patient registration to allow surgical navigation by using electromagnetic tracking technology. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5427 | CellaVision DM1200 with the body fluid application | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | CellaVision DM 1200 with the body fluid application automatically locates and presents images of nucleated cells on cytocentrifuged body fluid preparations. The body fluid application is intended for a differential count of white blood cells. | Accelerate counting and recognition of blood cells. | The system suggests a classification for each cell. The operator verifies the classification and has the opportunity to change the suggested classification of any cell. | The system suggests a classification for each cell. The operator verifies the classification and has the opportunity to change the suggested classification of any cell. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5464 | Nediser Reports QA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Nediser is a continuously trained artificial intelligence "radiology resident" that assists radiologists in confirming features in radiology images, including x-ray. It helps with workflow efficiency and quality. The program fills in the comparison date, identifies the joint for the appropriate template, performs measurements, and grades arthritis. The output is sent to the radiologist as a draft report. The radiologists use this to save time and improve accuracy. No output is sent to the EHRM or used by other providers. | - Promote standardization of radiology reports, as their inherently unstructured and narrative style can be subjective, leading to variability. - Promote quantitative reporting. - Reduce errors of transcription, such as laterality or number of views, which the Radiologist would otherwise have to dictate manually. | The AI system always requires a human in the loop and sends outputs into the radiology report draft. No output is sent to the EHRM. The output draft includes comparison dates, measurements, and arthritis grading, which the radiologists can review and edit as they finalize the report. | The AI system always requires a human in the loop and sends outputs into the radiology report draft. No output is sent to the EHRM. The output draft includes comparison dates, measurements, and arthritis grading, which the radiologists can review and edit as they finalize the report. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5468 | CARTO 3 EP Navigation System Software V8.0 (FG-5400-00, FG-5400-00U) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The CARTO 3 EP Navigation System V8.0, is a catheter-based atrial and ventricular mapping system designed to acquire and analyze navigation catheter’s location and intracardiac ECG signals and use this information to display 3D anatomical and electroanatomical maps of the human heart. The intended use of the CARTO 3 System is catheter-based cardiac electrophysiological (EP) procedures. | Supports identification of and viewing heart activity patterns. | The CARTO 3 System provides information about the electrical activity of the heart and catheter location during the procedure. | The CARTO 3 System provides information about the electrical activity of the heart and catheter location during the procedure. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5501 | Care Assessment Needs (CAN) Score | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Identify patients at potential risk for 90-day hospitalization and 90-day mortality: CAN 3.0 is a pair of logistic regression models that risk stratify Veterans by their likelihood of 90-day mortality and 90-day hospitalization. | Help providers care for patients; the outputs of the AI system are an additional resource for care providers. | Risk of 90-day hospitalization or 90-day death | Risk of 90-day hospitalization or 90-day death | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5509 | Cartesion Prime (PCD-1000A/3) V10.15 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The system is intended to acquire PET images of any desired region in the whole body and CT images of the same region (used for attenuation correction or image fusion) to detect the location of positron emitting radiopharmaceuticals in the body with the obtained images. | This information can assist with the research, detection, localization, evaluation, diagnosis, staging, restaging, and follow-up of diseases and disorders as well as their therapeutic planning and therapeutic outcome assessment. | Denoised images. | Denoised images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5542 | VA CART SYNTAX Risk Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The VA CART SYNTAX risk model predicts 30-day post-Percutaneous Coronary Intervention (PCI) mortality, using a combination of clinical and anatomic variables. The model is available to facilitate personalized informed consent discussions and appropriate preparation for high-risk PCI cases. | More informed personalized discussions with patients: The model is available to facilitate personalized informed consent discussions and appropriate preparation for high-risk PCI cases | The calculated score indicates a risk of major adverse cardiovascular events (death, Myocardial infarction (MI), stroke, and repeat revascularization) over time. | The calculated score indicates a risk of major adverse cardiovascular events (death, Myocardial infarction (MI), stroke, and repeat revascularization) over time. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5583 | VA CART Mortality Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The VA CART mortality risk model is based on logistic regression using baseline clinical and procedural variables to predict post-PCI 30-day mortality. The model is available to facilitate risk stratification at the point of care. | More informed personalized discussions with the patient; the outputs of the AI system are an additional resource for care providers. | The probability of post-Percutaneous Coronary Intervention (PCI) 30-day mortality | The probability of post-Percutaneous Coronary Intervention (PCI) 30-day mortality | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5624 | VA CART Nephropathy Risk Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The VA CART nephropathy risk model is based on logistic regression using baseline clinical and procedural variables to predict post-Percutaneous Coronary Intervention (PCI) acute kidney injury (AKI). The model is available to facilitate risk stratification at the point of care. It is in operations and maintenance. | More informed personalized discussions with patients | Probability of post-PCI AKI | Probability of post-PCI AKI | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5632 | Brainlab Elements Image Fusion, Contouring (4.5);Image Fusion (4.5);Fibertracking (2.0);BOLD MRI Mapping (1.0);Image Fusion Angio (1.0) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Brainlab Elements are software applications indicated for the processing of medical image data to support the intended user group to perform image guided surgery and radiation treatment planning. The Brainlab Elements 6.0 are applications that transfer Digital Imaging and Communications in Medicine (DICOM) data to and from picture archiving and communication systems (PACS) and other storage media devices. They include modules for 2D & 3D image viewing, image processing, image co-registration, image segmentation and a 3D visualization of medical image data for treatment planning procedures. | Accelerate imaging workflows. | Tumor segmentation estimates. | Tumor segmentation estimates. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5714 | BioPlex 2200 ANA Screen with Medical Decision Support Software for Use with BioPlex 2200 Multi-Analyte Detection System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The BioPlex 2200 Medical Decision Support Software (MDSS) is a pattern recognition algorithm that can enhance the performance of the Antinuclear Antibody (ANA) Screening associated diagnostic patterns, among its multiple assay results. | Support antibody screening procedures. | The MDSS can suggest one or more possible disease associations after identifying patterns from antibody results. | The MDSS can suggest one or more possible disease associations after identifying patterns from antibody results. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5788 | VA CART Bleeding Risk Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The VA CART bleeding risk model is based on logistic regression, using baseline clinical and procedural variables to predict post-Percutaneous Coronary Intervention (PCI) in-hospital bleeding events. The model is available to facilitate risk stratification at the point of care. It is in operations and maintenance. | The model is available to facilitate risk stratification at the point of care. The outputs of the AI system are an additional resource for care providers. | Probability of post-PCI bleed prior to hospital discharge | Probability of post-PCI bleed prior to hospital discharge | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5796 | Biograph Vision, Biograph MCT Family Of PET/CTs | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Biograph Vision and Biograph mCT systems provide registration and fusion of high-resolution metabolic and anatomic information from the two major components of each system (Positron emission tomography (PET) and computed tomography (CT)). These systems are designed for whole body oncology, neurology and cardiology examinations. | Accelerate imaging workflows. | Aligned and fused images. | Aligned and fused images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5837 | AutoContour Model RADAC V3 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | AutoContour is intended to assist radiation treatment planners in contouring and reviewing structures within medical images in preparation for radiation therapy treatment planning. | Accelerate radiation therapy treatment planning workflows. | Generates contours for radiation therapy planning. | Generates contours for radiation therapy planning. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5960 | Aplio i900, Aplio i800 and Aplio i700 Software V8.1 Diagnostic Ultrasound System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Diagnostic Ultrasound System Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800, Aplio i700 Model TUS-AI700 are indicated for the visualization of structures and dynamic processes within the human body, using ultrasound, and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs (thyroid, breast and testicle), trans-vaginal, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, musculo-skeletal (both conventional and superficial), laparoscopic and Thoracic/Pleural. This system provides high-quality ultrasound images in the following modes: B mode, M mode, Continuous Wave, Color Doppler, Pulsed Wave Doppler, Power Doppler and Combination Doppler, as well as Speckle-tracking, Tissue Harmonic Imaging, Combined Modes, Shear wave, Elastography, and Acoustic attenuation mapping. This system is suitable for use in hospital and clinical settings by physicians or legally qualified persons who have received the appropriate training. In addition to the aforementioned indications for use, when the Endoscopic Ultrasound (EUS) transducer GF-UCT180 is connected, Aplio i800 Model TUS-AI800/E3 provides image information for the diagnosis of the upper gastrointestinal tract and surrounding organs. | Accelerate ultrasound workflows. | Generates automated measurements of imaged structures. | Generates automated measurements of imaged structures. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-6001 | Alignment System Cranial, Alignment Software Cranial, Cirq Alignment Software Cranial Biopsy, Cirq Alignment Software Cranial sEEG, Varioguide Alignment Software Cranial | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Alignment System Cranial is intended to support the surgeon in planning and achieving a trajectory with surgical instruments during cranial stereotactic procedures. The indications for use are biopsy of intracranial lesions and placement of stereoelectroencephalography (SEEG) electrodes. | Accelerate surgery planning workflows. | This Machine Learning (ML) based functionality is used as an aid in the registration step (in surface matching) by allowing a pre-registration based on guide points (which are delivered by this algorithm). This pre-registration step is not mandatory. | This Machine Learning (ML) based functionality is used as an aid in the registration step (in surface matching) by allowing a pre-registration based on guide points (which are delivered by this algorithm). This pre-registration step is not mandatory. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-6042 | Acumen Hypotension Prediction Index - EV1000 Clinical Platform, Acumen Hypotension Prediction Index - Hemosphere Advanced Monitoring Platform, Acumen Hypotension Prediction Index, Hemosphere Advanced Monitoring Platform - Pressure | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The Edwards Lifesciences Acumen Hypotension Prediction Index feature provides the clinician with physiological insight into a patient’s likelihood of future hypotensive events (defined as mean arterial pressure < 65 mmHg for at least one minute in duration) and the associated hemodynamics. The Acumen HPI feature is intended for use in surgical or non-surgical patients receiving advanced hemodynamic monitoring. The Acumen HPI feature is considered to be additional quantitative information regarding the patient’s physiological condition for reference only and no therapeutic decisions should be made based solely on the Hypotension Prediction Index (HPI) parameter. | Support hypotension monitoring. | Provides a prediction of patient's likelihood of future hypotensive events. | Provides a prediction of patient's likelihood of future hypotensive events. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1506 | Continuous Glucose Monitoring Summary | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | Reduce work hours associated with manually scouring Ambulatory Glucose Profile metrics and reduce transcription errors while copying and pasting data between medical tools and platforms. | Provide a holistic view of patient data and improve medical providers' efficiency in caring for patients. | Summarized Ambulatory Glucose Profile metrics over a 14-day period in a Power BI dashboard that is accessible from Virtual Care Manager, Clinical Data Services, and Electronic Health Record. | Summarized Ambulatory Glucose Profile metrics over a 14-day period in a Power BI dashboard that is accessible from Virtual Care Manager, Clinical Data Services, and Electronic Health Record. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1891 | Carestream CS 9600 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | CS 9600 is an extraoral system intended to produce two-dimensional and three-dimensional digital X-ray images of the dento-maxillofacial, ENT (Ear, Nose and Throat), cervical spine and wrist regions at the direction of healthcare professionals, as diagnostic support for pediatric and adult patients. | Accelerate dental imaging workflows. | Provides recommended alignment for dental imaging system. | Provides recommended alignment for dental imaging system. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2045 | No Missed Malignancy (NMM) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Enhance clinical decision support capabilities and provide information not easily obtained through existing systems so that abnormal radiology findings are acted upon promptly, reducing risk of delayed cancer diagnoses and aligning with federal standards for patient safety and clinical accountability. The model has been successfully deployed to production. | By summarizing pending and processed alerts and expanding support to additional stations, this solution can increase efficiency and provide clinicians with critical information not easily accessible through existing CPRS/VistA systems. | PowerBI report | PowerBI report | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2168 | Sleep Apnea | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Improve Veteran care by increasing access to diagnosis and treatment of Veterans at high risk for sleep apnea, a condition that is not recognized in an estimated 80-90% of people who have it but poses significant negative impact on quality of life, productivity, sleep satisfaction, daytime fatigue, motor vehicle accidents, hypertension, and possibly other comorbidities (associations via observational trials of sleep apnea and cardiovascular morbidity and mortality). The AI solution predicts a patient's risk of Sleep Apnea. Clinicians use this information to increase access to diagnosis and treatment of patients at high risk for sleep apnea. A data pipeline, not a person, gathers the data and an LLM summarizes the information. Outputs are graded by humans to determine accuracy. | Increased efficiency in delivering healthcare. | Power BI Reports predicting a patient's risk of Sleep Apnea. | Power BI Reports predicting a patient's risk of Sleep Apnea. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2209 | Coordinated Care Tracking System (CCTS) Generative AI Support | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | To support the CCTS application we have created a generative AI pipeline that goes through all of the radiology and pathology notes for a day and extracts all references to cancer-related findings (tumors, nodules, masses, etc.) along with all of the characteristics and comments of the findings. These extracts will then be integrated in the CCTS application to ensure that proper diagnostic codes have been added to the scans so that a dangerous tumor is not missed. There is no fine-tuning with this system. Extracts from the notes are graded by humans to assess accuracy. This provides insights to inform case follow-up and scalable computing power for the CCTS clinical application. | Increased efficiency in healthcare delivery. | CCTS application | CCTS application | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2516 | Siemens Sequoia Crown | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The ACUSON Sequoia and Sequoia Select ultrasound imaging systems are intended to provide images of, or signals from, inside the body by an appropriately trained healthcare professional in a clinical setting for the following applications: Fetal, Abdominal, Pediatric, Neonatal Cephalic, Small Parts, Obstetrics & Gynecology (OB/GYN) (useful for visualization of the ovaries, follicles, uterus and other pelvic structures), Cardiac, Pelvic, Vascular, Adult Cephalic, Musculoskeletal and Peripheral Vascular applications. | Accelerate ultrasound workflows. | Provides automated measurements of imaged structures. | Provides automated measurements of imaged structures. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4280 | Philips 7300 C | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The AI-assisted Smart Collimation Thorax algorithm, developed by Philips, adjusts detector height and proposes collimation individually for each upright chest patient, based upon data from a 3D camera. | Smart Collimation Thorax reduces exam time by up to 35 seconds, resulting in potential daily time savings of 20 minutes for the medical team. | Recommended settings for imaging process. | Recommended settings for imaging process. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4673 | ARANZ Medical Silhouette Wound Management System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Evaluate perimeter, depth, and area of a wound (image): The ARANZ Medical Silhouette system is employed in the VA for monitoring and tracking wounds. | Enhance patient outcomes by providing non-biased wound metrics and healing data. Automatically detecting wounds in this manner can save the user time and improve the consistency of wound detection and the measurements that result. | Charts and graphs representing wound progression and healing: Silhouette employs Machine Learning (ML) algorithms to automatically detect the wound boundary in an image of a wound that has been captured by the user. | Charts and graphs representing wound progression and healing: Silhouette employs Machine Learning (ML) algorithms to automatically detect the wound boundary in an image of a wound that has been captured by the user. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4960 | VASQIP Mortality Calculator | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Tracking Surgical Outcomes: Collecting data on surgical procedures and outcomes, such as complications, morbidity, and mortality rates. Benchmarking Performance: Comparing surgical outcomes at VA facilities against national standards and best practices to identify areas where performance can be improved. Quality Improvement Initiatives: Implementing targeted interventions and quality improvement projects based on the analysis of collected data to enhance patient care. Reporting and Accountability: Providing regular reports on surgical outcomes and quality metrics to promote transparency and accountability within the VA healthcare system. | Increase Veteran safety for surgical procedures. Enhance patient outcomes by rigorously collecting and analyzing surgical data, identifying areas for improvement, and implementing evidence-based practices. | The Veterans Affairs Surgical Quality Improvement Program (VASQIP) uses the Observed-to-Expected (O/E) ratio as a key metric to evaluate surgical outcomes. The O/E ratio is a statistical measure that compares the actual (observed) outcomes of surgical procedures to the expected outcomes based on national benchmarks and risk-adjusted models. | The Veterans Affairs Surgical Quality Improvement Program (VASQIP) uses the Observed-to-Expected (O/E) ratio as a key metric to evaluate surgical outcomes. The O/E ratio is a statistical measure that compares the actual (observed) outcomes of surgical procedures to the expected outcomes based on national benchmarks and risk-adjusted models. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5001 | Frailty Risk Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Define frailty in surgical patients: screen for potential frailty in pre-surgical patients | Alignment of recommended therapies with patient goals. There is a single threshold for a Risk Analysis Index (RAI) score over which severe frailty is considered to exist and that prompts clinicians to have additional discussions of goals of care with Veterans. There is no required action or mandated restriction of care. | Risk analysis index score of frailty. | Risk analysis index score of frailty. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5189 | Butterfly iQ3 Ultrasound System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Butterfly iQ3 Ultrasound System is indicated for use by qualified and trained healthcare professionals to enable diagnostic ultrasound imaging and measurement of anatomical structures and fluids of adult and pediatric patients for the following clinical applications: Peripheral Vessel (including carotid, deep vein thrombosis and arterial studies), Small Organs (including thyroid, scrotum and breast), Cardiac, Abdominal, Lung, Procedural Guidance, Urology, Fetal/Obstetric, Gynecological, Musculoskeletal (conventional), Musculoskeletal (superficial) and Ophthalmic. | Accelerates ultrasound workflows. | The purpose of the Auto B-line Counter is to provide automated detection and automatic calculation of the number of B-lines to a user in a given rib space, and also to provide users the capability of reviewing the detected B-lines (via visual overlays). The overlay of B-lines does not mark images for detection of specific pathologies. The Auto B-line Counter enables the automated identification and count of B-lines during a lung scan and is integrated into the existing Butterfly iQ/iQ+ mobile application for use with the Butterfly iQ or iQ+ transducers. | The purpose of the Auto B-line Counter is to provide automated detection and automatic calculation of the number of B-lines to a user in a given rib space, and also to provide users the capability of reviewing the detected B-lines (via visual overlays). The overlay of B-lines does not mark images for detection of specific pathologies. The Auto B-line Counter enables the automated identification and count of B-lines during a lung scan and is integrated into the existing Butterfly iQ/iQ+ mobile application for use with the Butterfly iQ or iQ+ transducers. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5313 | CareView Patient Safety System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The system monitors patient movement to detect fall risks. | Enhance patient outcomes and safety. | Sends an alert to the central nurse station when assistance is needed. | Sends an alert to the central nurse station when assistance is needed. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-6290 | GE Omni Legend | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The Omni Legend positron emission tomography (PET)/computed tomography (CT) system is intended for CT attenuation corrected, anatomically localized PET imaging of the distribution of positron-emitting radiopharmaceuticals. It is intended to develop an image of the whole body, head, heart, brain, lung, breast, bone, gastrointestinal and lymphatic systems, and other organs. The system is also intended for stand-alone, diagnostic CT imaging. | Accelerates imaging workflows. | Denoised images. | Denoised images. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1815 | Analytics, Data, and Decision Support Unified Platform (ADDSUP) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Reduce unmanaged and inefficient spend and duplicative contracts. Provide proper analytics of VA spend and data sharing across the agency. | - Increased Spend Under Management and improved cost avoidance ($20B realized so far). - Improved acquisition time and quality of acquisition packages. | Data analytics resulting from the canvassing of over 900K acquisition documents. Prescriptive analytics identifying proper acquisition strategies. | Data analytics resulting from the canvassing of over 900K acquisition documents. Prescriptive analytics identifying proper acquisition strategies. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1020 | AI Microlearning (OttoLearn) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | The VA Acquisition Academy’s (VAAA) Acquisition Intern Program (AIP) trains 1102 contract specialists. The 12-month program (including one month of post-training support) consists of face-to-face and virtual classroom training at the VA Acquisition Academy and On-The-Job Training (OJT) at contracting sites across the United States. Periods of academic training are followed by longer periods of OJT. Unfortunately, due to the dynamic nature of the acquisition field offices, interns are often not able to focus on the acquisition lifecycle steps and content they learned during the previous classroom training block. With the introduction of a comprehensive national certification exam (FAC-C/P exam) in 2023, we needed to help our interns better retain the information they learned during the academic training blocks while out on OJT. This use case is intended to combat learning loss during long periods of OJT, which may have a severely negative impact on interns’ ability to pass the FAC-C/P certification exam. | Increased first-time pass rate on the FAC-C/P certification exam. During the pilot year, the interns that used the software the most passed the certification exam at a rate of 94%, while those in the same class who used the software the least passed at a rate of 76%. | The only outputs are the training use outputs (usage rates, mastery levels, knowledge gaps, etc.) | The only outputs are the training use outputs (usage rates, mastery levels, knowledge gaps, etc.) | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1897 | VA Section 508 Office URL Ownership Prediction Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Inaccurate identification of URL ownership for webpages and documents leads to inefficiencies in remediation processes, delays in addressing issues, and gaps in monitoring and reporting. These issues ultimately undermine effective management and compromise the quality of electronic products for Veterans and their beneficiaries. | The AI system reduces the level of effort and turnaround time for the OIT 508 Office to curate URLs and coordinate with VA Administrations to review and, as necessary, remediate URLs for accessibility compliance. In the absence of the AI system both the 508 Office and VA Administrations would be required to undertake a lengthy and iterative process to identify the VA Administration that is the formal owner of a URL. | The AI Model has two outputs. 1) Agency Owner with responses as: VHA, VBA, NCA, VACO, OIT, OCTO, and Unknown. 2) Prediction Score ranging from 0 to 1. The system consists of manually executed scripts using the AI model on local GFE. Besides the AI model, the scripts additionally apply manually constructed business rules and create prepopulated Excel files for each VA Administration to use for quarterly reviews and data entry. | The AI Model has two outputs. 1) Agency Owner with responses as: VHA, VBA, NCA, VACO, OIT, OCTO, and Unknown. 2) Prediction Score ranging from 0 to 1. The system consists of manually executed scripts using the AI model on local GFE. Besides the AI model, the scripts additionally apply manually constructed business rules and create prepopulated Excel files for each VA Administration to use for quarterly reviews and data entry. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3086 | VA GPT | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | VA GPT is intended to improve VA employee efficiency, ultimately improving their service to Veterans. Employees are using it to assist with basic administrative tasks (drafting emails, summarizing documents, summarizing meeting notes, etc.) | - Boost productivity, e.g., get instant assistance with writing, research, and analysis. - Save time, e.g., quickly reformat meeting notes and draft presentations with AI assistance. - Enhance creativity, e.g., generate new ideas and perspectives to tackle challenges in innovative ways. | Outputs include generative AI responses to users' prompts. | Outputs include generative AI responses to users' prompts. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-6038 | Fusion 2024 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Assist users with low or no vision with interpreting graphs that were not properly provided alternative text descriptions. | Increased efficiency and cost savings as sighted assistance is not required to describe what is being displayed. | Descriptions of images and graphs. | Descriptions of images and graphs. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1676 | Summit AI Assistant | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The AI is intended to reduce the time required to extract rich and relevant information from a corpus of documents pertaining to a specific business domain. A combination of keyword and vector search methods delivers precise and relevant information from a specific business domain's knowledge base. | The AI increases efficiency within a business process, especially in business processes that involve sifting through large amounts of information/knowledge captured in documents. | The outputs of the AI system are: 1. Responses to plain language queries submitted by a business user. 2. A 'chat'-like experience with the data. | The outputs of the AI system are: 1. Responses to plain language queries submitted by a business user. 2. A 'chat'-like experience with the data. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3234 | eDiscovery (VASI 2460): Predictive Coding | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Document review teams train an AI model to identify positive or negative content within a universe of documents. | Prioritizes and surfaces relevant documents early to speed review and improve accuracy. | Predicted ranking for remaining documents | Predicted ranking for remaining documents | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5411 | Veteran Experience Office Retention/Trust Models in Customer Experience Insights | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The models reside in Customer Experience Insights. The inputs are CDW VISTA data and CX Insights data. The outputs are predictions of veteran trust and retention given veteran experience inputs. The purpose of the models is to predict whether a veteran would say they trust the VA (Trust) and whether they would continue to receive care with VA (Retention) based on their experiences with VA. The intended use is to understand drivers of these important metrics. The problem to be solved is to programmatically identify drivers of veteran experience to inform methods of improving experience metrics such as retention and trust. | Improved veteran experience, improved veteran trust, improved veteran retention. | Predicted trust, predicted retention, driver importance metrics, driver impact metrics on output variables. | Predicted trust, predicted retention, driver importance metrics, driver impact metrics on output variables. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-967 | CHAMPVA Automation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI solution is designed to eliminate the manual data entry burden associated with processing CHAMPVA applications. Each application previously required up to 61 administrative steps, contributing to a backlog of over 70,000 unprocessed applications and delays of up to 162 days. The AI automates data extraction from VA Form 10-10d PDFs and streamlines ingestion into the Claims Processing and Eligibility (CP&E) system, significantly reducing processing time and administrative workload. | Increased Efficiency: Automating data extraction and processing reduces time spent on manual tasks. Cost Savings: Reduces labor costs associated with manual data entry and backlog management. Improved Service Delivery: Accelerates application processing, allowing Veterans and their families to receive benefits faster. Enhanced Staff Productivity: Frees up staff to focus on eligibility determinations and higher-value tasks. Supports VA’s Mission: Strengthens the VA’s commitment to serving Veterans and their families with timely and effective healthcare benefits. | The AI system outputs structured data extracted from VA Form 10-10d PDF applications. This data is formatted into text files suitable for ingestion into the CP&E system. The outputs include: applicant demographic and eligibility information; validated data records for benefit processing; automated alerts via VA Notify to inform applicants of status updates; and data logs for tracking and auditing purposes. These outputs enable downstream systems to process applications efficiently and support eligibility decision-making. | The AI system outputs structured data extracted from VA Form 10-10d PDF applications. This data is formatted into text files suitable for ingestion into the CP&E system. The outputs include: applicant demographic and eligibility information; validated data records for benefit processing; automated alerts via VA Notify to inform applicants of status updates; and data logs for tracking and auditing purposes. These outputs enable downstream systems to process applications efficiently and support eligibility decision-making. | Yes - https://catalog.data.gov/dataset/civilian-health-and-medical-program-of-the-department-of-veterans-affairs-champva | Yes | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2545 | Privacy Act Automation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | VBA receives over 160,000 Privacy Act requests each year requiring personnel to retrieve, review, and release these records to the Veteran. Prior to the D.AI solution, this process was highly manual, requiring significant labor resources that led to an ever-increasing backlog. AI is used by the contractor to create a preliminary set of recommended redactions that accelerates the review for VBA staff. | Since D.AI deployed, VA has processed over 25,000 cases through the system, using a smaller team of VA FTE to review and release these cases and reducing response time for lower-complexity cases from over a month to less than 4 days. Overall, this solution increases the efficiency of CSD personnel, allowing them to release over a hundred cases a day instead of 3.5, reduces the cost to the government by allowing for eDelivery, and improves the Veteran experience through quicker and more accessible delivery of records. | The D.AI system retrieves documents based on the request type, as defined by VA, and then completes a preliminary processing review based on defined business rules for redaction. The request is then finalized by VA before release to the Veteran. | The D.AI system retrieves documents based on the request type, as defined by VA, and then completes a preliminary processing review based on defined business rules for redaction. The request is then finalized by VA before release to the Veteran. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-1471 | Articulate 360: AI Assistant | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | The ability to efficiently and quickly create interactive and engaging eLearning for VBA claims processors. Engaging eLearning helps reduce instructor burden and time in traditional classroom settings, which increases productivity. Efficient training development also helps reduce instructional design burden, especially amidst decreased hiring abilities. | The Articulate AI Assistant helps build more engaging eLearning courses in a timely manner. The AI assistant can help generate text to voice audio, create images, create outlines, create polls/quizzes, etc. This enhances the end-user experience of training and increases efficiency in training development, which in turn provides cost savings related to the amount of time taken to develop training. | Text-to-voice audio, images, outlines, polls, quizzes, and sounds for training/eLearning. | Text-to-voice audio, images, outlines, polls, quizzes, and sounds for training/eLearning. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5163 | Customer Sentiment | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Proactively identify drops in customer service as perceived by the customer. | Data assists in the proactive identification of ways to change customer service that can positively impact customer experience with teams. | The AI system assigns each customer comment a score between 0 and 1, with 1 indicating a very positive comment. | The AI system assigns each customer comment a score between 0 and 1, with 1 indicating a very positive comment. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-733 | Auto Doc ID | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The purpose of the tool is to convert inbound images into a standard format that is required for downstream processing for auto document classification and OCR/Data Extraction. After all document processing is completed and the final output is being prepared, the images are converted back into a searchable PDF. Auto Doc ID provides suggestions to operators to decrease turnaround time and increase quality. | Time savings and higher quality. Improved OCR/Data Extraction Accuracy: clearer, properly aligned images increase recognition success for auto classification of Document ID. Standardization for Automation: converting all documents into TIF IV ensures reliable document classification and processing. Reduced Manual Corrections: automated alignment, noise removal, and skew correction minimize human intervention. Preservation of Original Document Integrity: annotations are applied without altering original text, supporting compliance and readability. Overall, the impacts are limited to operational efficiency and data accuracy. Correct extraction reduces manual data entry for document classification. The AI has no role in determining eligibility, adjudicating claims, or making benefit decisions. | Suggested doc ID. It provides document type suggestions in the ImageSort application to guide operators. D3P does not perform content-based analysis but makes technical image-quality determinations, such as: detecting and correcting image orientation (auto-rotation); identifying and removing image noise (de-speckling); determining and correcting crooked images (de-skewing); detecting whether images are in color, grayscale, or black-and-white; and preparing images for annotations by adjusting size and margins without altering the original text. All inbound images are standardized into TIF IV format for consistent downstream processing. It does not interpret meaning, generate content, or make adjudicative decisions. | Suggested doc ID. It provides document type suggestions in the ImageSort application to guide operators. D3P does not perform content-based analysis but makes technical image-quality determinations, such as: detecting and correcting image orientation (auto-rotation); identifying and removing image noise (de-speckling); determining and correcting crooked images (de-skewing); detecting whether images are in color, grayscale, or black-and-white; and preparing images for annotations by adjusting size and margins without altering the original text. All inbound images are standardized into TIF IV format for consistent downstream processing. It does not interpret meaning, generate content, or make adjudicative decisions. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-954 | Medallia Software as a Service (SaaS) - VSignals and ESignals | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Ability to take concrete action to address the concerns and pain points of Veterans by case-managing and undertaking service recovery on input/commentary provided by Veterans about their VA experiences in VSignals free-text comments. | Systems redesign improvements by VA administrations based on actionable input from Veterans, family members, caregivers, and survivors. | Veteran comments that are routed to the Veterans Crisis Line, National Center for Homeless Veterans, or the Patient Advocate Tracking System for VHA case management and service recovery. | Veteran comments that are routed to the Veterans Crisis Line, National Center for Homeless Veterans, or the Patient Advocate Tracking System for VHA case management and service recovery. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1114 | Volpara Imaging Software | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Generates volumetric measurements of breast x-ray scans. | Accelerate volumetric measurements of breast x-ray scans. | From the density maps, various quantitative density-map based statistics are computed as follows: • Volume of Fibroglandular Tissue in cm³ • Volume of Breast in cm³ • Volumetric Breast Density (the percentage of fibroglandular tissue in breast) • Average thickness of dense tissue in cm • Maximum thickness of dense tissue in cm • Maximum volume of dense tissue above any 1 cm² square region (and location) • Image quality assurance metrics. From the volumetric breast density, a BI-RADS 4th Edition and 5th Edition breast density category can be attained by applying thresholds set by the software. The device outputs those metrics along with the density maps themselves, marked with the location of the various maxima. Volpara Imaging Software 1.5.6 operates on a Windows or Linux server that meets Volpara data input and output requirements and generally is located outside the patient environment. The device does not contact the patient, nor does it control any life-sustaining devices. | From the density maps, various quantitative density-map based statistics are computed as follows: • Volume of Fibroglandular Tissue in cm³ • Volume of Breast in cm³ • Volumetric Breast Density (the percentage of fibroglandular tissue in breast) • Average thickness of dense tissue in cm • Maximum thickness of dense tissue in cm • Maximum volume of dense tissue above any 1 cm² square region (and location) • Image quality assurance metrics. From the volumetric breast density, a BI-RADS 4th Edition and 5th Edition breast density category can be attained by applying thresholds set by the software. The device outputs those metrics along with the density maps themselves, marked with the location of the various maxima. Volpara Imaging Software 1.5.6 operates on a Windows or Linux server that meets Volpara data input and output requirements and generally is located outside the patient environment. The device does not contact the patient, nor does it control any life-sustaining devices. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1323 | UiPath Document Understanding | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | This tool has the ability to intelligently pull data from different types of documents and transfer it to other applications so that a human does not have to. It makes this type of input more accurate and can be run 24/7/365 if needed, leaving staff to help with forms/documents that cannot be interpreted by the automated system. | This tool allows staff to focus on higher-level decision-making processes instead of data input. It increases the efficiency of a process by not introducing errors. It can be a cost-avoidance tool when the process it is part of incurs extra fees for this type of work. | Outputs of Document Understanding provide the information from the form to be electronically input into other applications via the Intelligent Automation Process. | Outputs of Document Understanding provide the information from the form to be electronically input into other applications via the Intelligent Automation Process. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-200 | Axon Body Camera and DMS - OSSO | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | AI is used to aid in redaction of recorded video and audio footage. Additionally, AI is used to transcribe recorded videos. | AI-assisted redaction cuts down on manpower costs and speeds up the process of redacting recorded footage. The redaction protects individuals' privacy. Additionally, AI is used in transcribing recorded audio, removing the requirement for an FTE to complete this process. | To have better outputs in searching the system's data repository for case files, the evidence locker, and DMS; to have a better result when doing the redaction of evidence, files, videos, and photos; and, on the Fleet system only, to have better results in identifying license plates in the license plate recognition part of the system. | To have better outputs in searching the system's data repository for case files, the evidence locker, and DMS; to have a better result when doing the redaction of evidence, files, videos, and photos; and, on the Fleet system only, to have better results in identifying license plates in the license plate recognition part of the system. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-204 | GE Logiq E10 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Accelerate ultrasound workflow with AI assistant tools. | Reduce ultrasound procedure times. | Changes workflow settings and detects kidney dimensions. | Changes workflow settings and detects kidney dimensions. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2348 | ReflexAI | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Crisis Responders gain more practice. | Saves thousands of hours of manpower and provides consistent, supportive feedback. | Call simulations produce a scoring summary highlighting strengths and areas of growth for each individual interaction. | Call simulations produce a scoring summary highlighting strengths and areas of growth for each individual interaction. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2426 | App feedback model for NLP tasks | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The objective is to utilize Natural Language Processing (NLP) with comment reviews for Appfeedback, specifically to identify named entities (NER), profanity, and stop words, and provide an automated approach to pre-processing and cleaning text for downstream analytics tools. The system uses spaCy's en_core_web_sm model, an open-source software and model, to analyze text reviews from various sources, including external sources (Google and Apple stores) and internal sources (FeedbackUI and VA Mobile). The model provides an out-of-the-box approach for NLP tasks, including NER, profanity detection, and stop word removal. The data sources include: text reviews from the VA's mobile applications on the Google and Apple stores (external sources); reviews from FeedbackUI (internal source, available in the OIA_MobileHealth database); and reviews from VA Mobile (internal source, available via CSV files on the Mobile VA's internal website). Users: The users of this system are likely the OCC Data Science Team, who are responsible for developing and maintaining the pipeline. Target Audience: mobile application developers and other internal stakeholders who may be interested in analyzing and understanding the sentiment and feedback from these reviews. Problem to be solved: consolidating feedback provided by patients and providers for VA Mobile Apps. Simple aggregation of data that provides a dashboard for clients to view and monitor trends. | Decrease working hours in manual review of feedback. | Identified issues shown within Power BI dashboard. | Identified issues shown within Power BI dashboard. | Yes |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2623 | Verathon BladderScan Prime PLUS System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Accelerate bladder measurements. | Accelerate bladder measurement workflows. | Bladder volume, directional aiming with real-time feedback, battery status and usage rate indicators are displayed on the LCD display. | Bladder volume, directional aiming with real-time feedback, battery status and usage rate indicators are displayed on the LCD display. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2672 | App Feedback categorization model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Consolidating feedback provided by patients and providers for VA Mobile Apps. Simple aggregation of data that provides a dashboard for clients to view and monitor trends. | Reducing hours spent manually reviewing app feedback. | Classification of app feedback in Power BI dashboard. | Classification of app feedback in Power BI dashboard. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2713 | LINQ II Insertable Cardiac Monitor | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | The device records cardiac information in response to automatically detected arrhythmias and patient-initiated activation or markings. The device is designed to automatically record the occurrence of an episode of arrhythmia in a patient. | Improve detection and recording of arrhythmias. | The device records cardiac information in response to automatically detected arrhythmias and patient-initiated activation or markings. | The device records cardiac information in response to automatically detected arrhythmias and patient-initiated activation or markings. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3045 | VA Supply Chain Knowledge Management Dashboard | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Identify indictments of the VA’s enterprise supply chain in OIG, GAO and other public reports | Informed decision making to enhance the VA enterprise supply chain | Library of VA supply chain indictments in public reports; summary and entirety of reports are available for review | Library of VA supply chain indictments in public reports; summary and entirety of reports are available for review | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3164 | Human-Centered Design (HCD) User Feedback Summary, Analysis, and Design | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The Human-Centered Design (HCD) team uses Large Language Models (LLMs) and custom agents to enhance user experience (UX) research. The AI system streamlines tasks such as analyzing user feedback, generating user stories, summarizing workshop outcomes, and refining user personas. The primary purpose is to improve the accessibility and clarity of user feedback, providing a better understanding of the needs of veterans and healthcare providers. The main problem to be solved is to maximize insights with massive amounts of app feedback to drive change. | Faster insights, less manual labor for staff, and improved efficiency. | Power BI dashboard consolidating app feedback across Mobile App (VA resource), Feedback UI (VA resource), Google, and Apple. | Power BI dashboard consolidating app feedback across Mobile App (VA resource), Feedback UI (VA resource), Google, and Apple. | Yes | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3254 | Vivid E80/ Vivid E90/ Vivid E95 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Vivid E80 / Vivid E90 / Vivid E95 is a Track 3 diagnostic ultrasound system for use by qualified and trained healthcare professionals. It is primarily intended for cardiac imaging and analysis but also includes vascular and general radiology applications. It is a full-featured diagnostic ultrasound system that provides digital acquisition, processing, analysis, and display capabilities. | Improve ultrasound workflows. | Measurements of areas of interest. | Measurements of areas of interest. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3373 | Avicenna CINA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communicating suspected positive findings of (1) head CT images for Intracranial Hemorrhage (ICH) and (2) CT angiographies of the head for large vessel occlusion (LVO). | Accelerate detection of ICH and LVO. | The user is presented with notifications on cases with suspected ICH or LVO findings. Notifications include compressed preview images - these are meant for informational purposes only and are not intended for diagnostic use beyond notification. The device does not alter the original medical image, and it is not intended to be used as a diagnostic device. | The user is presented with notifications on cases with suspected ICH or LVO findings. Notifications include compressed preview images - these are meant for informational purposes only and are not intended for diagnostic use beyond notification. The device does not alter the original medical image, and it is not intended to be used as a diagnostic device. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3418 | CLARUS | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | The CLARUS 700 angiography imaging aids in the visualization of the vascular structures of the retina and the choroid. | With a single capture, CLARUS 700 produces a 90° high definition widefield image. Widefield images are automatically merged to achieve a 135° ultra-widefield view. The CLARUS 700 makes use of a deep learning algorithm for Optic Nerve Head (ONH) detection. The ultra-widefield montage on CLARUS 700 is no longer dependent just on the patient accurately fixating their gaze on the internal fixation. With the ONH detection, the software will find the optic nerve and determine based on the image(s) captured where the patient was gazing at the point of capture. The CLARUS 700 device allows clinicians to easily review and compare high-quality images captured during a single exam while providing annotation and caliper measurement tools that allow in-depth analysis of eye health. CLARUS 700 is designed to optimize each patient’s experience by providing a simple head and chin rest that allows the patient to maintain a stable, neutral position while the operator brings the optics to the patient, facilitating a more comfortable imaging experience. The ability to swivel the device between the right and left eye helps technicians capture an image without realigning the patient. Live Infrared (IR) Preview allows the technician to confirm image quality and screen for lid and lash obstructions, prior to imaging, ensuring fewer image recaptures. | Automated image merging. | Automated image merging. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3738 | BTSSS 3542 Review | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Power Automate AI model that was trained on VA Form 3542s filled out by patients; it scans the PDFs to rename them for travel staff. Decreases the FTE time spent renaming files and creates a dashboard to search and sort the files filled out by patients. | Reducing FTE time on the process; increasing the 10-day metric for BTSSS submissions. | Microsoft Power Platform AI modeling. | Microsoft Power Platform AI modeling. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4148 | Dragon Medical One | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Solution to decrease the provider administrative burdens of typing medical notes. | Enhancing patient outcomes by increasing provider-patient time, decreasing provider burnout, increasing efficiency in documentation, and therefore improving patient access due to providers having more availability to see patients. | DMO is an AI-powered speech recognition solution that utilizes advanced AI algorithms to deliver accurate, real-time medical transcription and drive voice-activated documentation workflows. | DMO is an AI-powered speech recognition solution that utilizes advanced AI algorithms to deliver accurate, real-time medical transcription and drive voice-activated documentation workflows. | No | |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4193 | Deep-Learning approaches to develop candidate lists of terms for use in text searches | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The intended use is to increase the comprehensiveness of term lists to be used in text-matching approaches to identifying concepts of interest in free-text medical record notes. Text-matching approaches are an efficient method for identifying text of interest in medical notes. The success of text-matching approaches depends on the quality of term lists used to search for a concept of interest. While humans can generate word lists for text-matching approaches, the generated word lists are likely to be limited and biased by the vocabulary, language habits, population exposure, and local dialects of the human generating the list. Deep learning methods can help to augment candidate term lists to help overcome some of these challenges. Once generated, these augmented candidate terms are reviewed by subject matter experts for appropriateness and accuracy of the suggested terms for the concept and use case of interest. | Here we (1) query pre-trained large language models (e.g., ChatGPT or Llama) through a user interface that allows prompting (such as the OpenAI API or CodeLlama) and/or (2) query word and phrase “embedding” models (e.g., Word2Vec and Phrase2Vec) to expand candidate term lists for review by subject matter experts. Our team has found both approaches effective for expediting the generation of more comprehensive term-to-concept mappings for use in text-matching algorithms that increase the accuracy of clinical concept extraction. We expect this use of AI to improve initial brainstorming of terms related to a given concept, resulting in a more sensitive approach to finding information of interest in free-text clinical notes. Experience to date has found that this approach enriches term sets in ways that increase sensitivity for identification of text of interest in clinical TIU notes when used in our CLEVER natural language processing pipeline. | The output of this use of AI is a candidate list of terms related to a concept of interest. This candidate list is combined with suggestions generated by human experts, and these are then reviewed by a subject matter expert to generate a final list of terms for use in text-matching algorithms to extract medical record free-text mentions of interest. For the pre-trained large language model approach, the input is a query prompting for synonyms of terms related to the concept of interest. The output is the answer provided by the LLM, including the suggested candidate terms and the original terms that were stated in the query. For the word and phrase embedding models, the input is an initial list of terms that are relevant to the concept of interest and the outputs are the suggested candidate terms, ranked by their statistical similarity to the original terms, based on the transformation of a clinical corpus into vector space or an “embedding model”. Regardless of the type of model used, all suggested candidate terms are reviewed by subject matter experts to ensure quality term lists are used in all text searches. | The output of this use of AI is a candidate list of terms related to a concept of interest. This candidate list is combined with suggestions generated by human experts, and these are then reviewed by a subject matter expert to generate a final list of terms for use in text-matching algorithms to extract medical record free-text mentions of interest. For the pre-trained large language model approach, the input is a query prompting for synonyms of terms related to the concept of interest. The output is the answer provided by the LLM, including the suggested candidate terms and the original terms that were stated in the query. For the word and phrase embedding models, the input is an initial list of terms that are relevant to the concept of interest and the outputs are the suggested candidate terms, ranked by their statistical similarity to the original terms, based on the transformation of a clinical corpus into vector space or an “embedding model”. Regardless of the type of model used, all suggested candidate terms are reviewed by subject matter experts to ensure quality term lists are used in all text searches. | Yes | https://github.com/suzytamang/clever-rockies | ||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4472 | Identify access to genetic testing in Veterans with breast cancer. | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Identify, from unstructured medical notes, which Veterans with breast cancer do or do not have access to germline genetic testing. | Providers become more aware of whether a Veteran has access to germline genetic testing, which helps improve the quality of care VA delivers, since germline genetic testing is recommended for everyone with breast cancer. | Whether the patient has access to germline genetic testing. | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5550 | Cardiac CT Function Software Application | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | It is designed to support physicians in the visualization, evaluation, and analysis of the heart function through the calculation of parameters, such as volume and mass. The device is intended to be used as an aid to the existing standard of care and does not replace existing software applications that physicians use. | Automate volume and mass calculations to support physicians in the visualization, evaluation, and analysis of the heart function. | Volume and mass estimates. | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5628 | Purchase Order Filing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Computer Vision: AI that processes and interprets visual data (e.g., images and videos). | Filing purchase orders and related documents is time-consuming for pharmacy staff, and the paper files take up floor space. | Having the AI file the documents online saves staff time and makes it easier to find the documents we need in the event of an audit. | The AI creates a folder on Sharepoint for each purchase order and files the documents related to that purchase order in the folder. There is one folder for each purchase order, and the purchase order number is the folder name. | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5665 | VA CART Myocardial Ischemia NLP | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Extracts myocardial ischemia information in VistA using Natural Language Processing (NLP) of radiology report text. | Helps providers care for patients. | Ischemia identified | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5706 | VA CART Ejection Fraction NLP | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | The model is available to facilitate risk stratification at the point of care. It extracts ejection fraction (EF) information in VistA using Natural Language Processing (NLP) of Echocardiogram TIU report text. | This use case will enable a more informed, personalized discussion with patients. | Identifies Ejection Fraction | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-913 | Limited Use of Azure Speech Services in PETALS Platform | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Azure speech services allow the Interactive Voice Response (IVR) system to detect whether the IVR should leave a voicemail or continue to a voice prompt to engage the end user. | Azure speech services reduce burden on the Veteran and enhance research and outreach efforts to engage the Veteran. | Detection of live person or answering machine. | | Yes |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-2291 | Hospital Acquired Infections | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Automate the identification of hospital-acquired infections to meet VA reporting requirements and reduce the reporting burden on health care provider staff. We are creating a generative AI pipeline that will read through structured and unstructured data to identify the patients with central lines who developed an infection during an inpatient stay at a VA facility. There is no fine-tuning with this system. The patients identified are graded by humans and compared to known cases to assess accuracy. | Increased efficiency in identifying and reporting hospital-acquired infections. | PowerBI Reports summarize hospital-acquired infections. | | Yes |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3043 | Use of Large Language Models to improve classification of identified text snippets from the CLEVER natural language processing pipeline | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing: AI that processes, interprets, and shares information in human language. | Use of text-matching approaches like CLEVER to identify clinical information of interest in free-text clinical notes is an efficient approach but can be over-inclusive (e.g., including terms used with an alternative meaning; for example, a search for “arms” may pick up mentions of forearms and firearms). A variety of approaches can be used to further filter identified text snippets to those more likely to be of interest for a given use case. Here we use large language model queries to classify text snippets as being relevant to a concept of interest. | Improved classification of text snippets from clinical notes will increase clinician and management efficiency in use of decision support to summarize and review information extracted from free-text clinical notes. For example, improved filtering of text snippets for clinical risk factors of interest will reduce clinician time in obtaining clinically needed information about risk factors of interest by filtering out text snippets that are unlikely to be meaningful to their immediate clinical decision. In short, the clinician will have fewer candidate text snippets to read through to get the information they need. Likewise, summary views of identified possible concept mentions will be more accurate, reducing noise in views to support strategic decisions. | This AI use case adds additional LLM-derived classification labels to text snippets from free-text clinical notes identified by the CLEVER NLP pipeline. 
These labels indicate the LLM's classification of the text snippet as being related to a concept of interest. For example, in our first deployed use of this method, an LLM was used to improve classification of text snippets identified as including the term “xylazine” in the note text. The LLM classified identified text snippets into one of three labels: (1) "Other" (OTH) for snippets without evidence of suspected xylazine exposure, (2) "Suspected-Positive" (SUS-P) for snippets with positively asserted evidence of suspected exposure, and (3) "Suspected-Negative" (SUS-N) for snippets where the suspected exposure was negated. The OTH category included, for example, the many cases in which providers educated patients about the presence of xylazine in the illicit drug supply. The LLM-derived classification labels were used to narrow down text snippets for display as mentions of “possible xylazine exposure” on the STORM decision support display (to help clinicians find information of relevance to opioid risk management across VA medical records) and in maps of possible xylazine exposures for strategic planning. In both cases, only those text snippets labeled as “SUS-P” were included on the displays. | | Yes |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-364 | Executive Spend Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | AI is used to enrich Integrated Funds Control, Accounting, and Procurement (IFCAP) purchase order data by categorizing line item purchases into pre-defined spend categories that are more analytically useful. It offers insights into spending patterns and helps executives monitor and manage procurement expenditures effectively. | Enhanced visibility into spending patterns and trends, allowing VISN- and VHA-level leadership to make informed and strategic decisions about areas to drive spending efficiencies. | A comprehensive overview of procurement spending, with dynamic filtering options to dissect spending across various categories and geographic regions: * Overview and detailed tabs with filtering options for time periods and geographic areas. * Display of total spend, total line items, and breakdown by processing types. * Filtering and sorting features by budget object code, fund control point, and vendor. * Additional visualization tools, including spend categorization and trend analysis. AI outputs are used to inform more complete and unique analysis of spend data. These outputs do not directly trigger any decisions automatically. | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-3891 | Scriptpro Ticket Report | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Agentic AI: AI systems that perform tasks or make decisions autonomously with minimal human intervention. | Tabulating the ScriptPro report, which is used to track pharmacy traffic volume, is time-consuming and was previously done manually. AI is now used to automate the tabulation for efficiency. | Having the AI tabulate the report saves staff time. | Power Automate AI Builder reads a report from ScriptPro with information on our pharmacy tickets and tabulates the results in a SharePoint list. The report is used to track pharmacy traffic volume. | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-4 | HTM112 Tutor | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | While current VA and Healthcare Technology Management (HTM) courses offer tailored content, the HTM112 Tutor elevates the learning experience by providing individualized and personal support that adapts to each learner's unique pace, style, and specific knowledge gaps. It serves as a learning enhancement tool, not a replacement for instruction, by increasing accessibility to information and fostering an individualized learning journey. This ultimately boosts course efficiency and overall training effectiveness. | The HTM112 AI Tutor delivers measurable benefits across training effectiveness, operational efficiency, and patient safety outcomes: • Accelerated Learning: Reduces time spent searching course materials by over 80% • Personalized Support: Provides individualized assistance tailored to each learner • Improved Retention: Offers immediate clarification and reinforcement; adds the ability to create learning aids to improve recall and retention back on the job • 24/7 Accessibility: Ensures continuous learning support • Instructor Resource Optimization: Frees instructors from repetitive questions • Reduced Training Administration: Streamlines content delivery and reduces manual support requests • Scalable Knowledge Management: Provides consistent, verified information access across HTM staff • Cost-Effective Continuous Education: Reduces need for refresher training • Improved Technical Competency: Better-trained HTM professionals enhance the reliability and safety • Faster Issue Resolution: On-demand access to networking knowledge reduces equipment downtime and improves patient care continuity • Standardized Best Practices: Ensures 
consistent application of networking procedures • Risk Mitigation: Reduces potential for networking-related equipment failures • Knowledge Preservation: Maintains institutional knowledge for future HTM professionals • Continuous Professional Development: Supports career advancement • Enhanced Workforce Readiness: Ensures HTM staff remain current with VA standards | The HTM112 Tutor is built on the Summit AI Assistant (SAA) platform, utilizing retrieval-augmented generation (RAG) technology. All content has been curated, cleansed, and verified for accuracy to ensure factual grounding, transparency, and access to current, domain-specific information across all generated outputs: Conversational Responses: • Direct, accurate answers to networking and HTM-related questions with source attribution • "I'm not sure" responses when queries fall outside the verified knowledge base or course scope • Contextual explanations tailored to the user's specific learning needs • Relevant excerpts from course transcripts, presentations, and lab materials • Precise citations with page numbers, section references, and source document identification • Cross-references to related course materials and resources • Clear explanations of complex networking concepts with practical examples • Comprehensive topic and lesson summaries • Interactive learning aids including quizzes, knowledge checks, and flashcards • Step-by-step procedural guides and troubleshooting checklists • Personalized study recommendations based on individual learning gaps • Targeted guidance for completing hands-on exercises • Troubleshooting support for lab-related technical issues • Progress-based hints and explanations to facilitate independent learning All outputs maintain consistency with official VHA HTM training standards and include transparent source attribution to support verification and further study. | | Yes |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-5322 | ValidateNow | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI: AI that generates new or synthetic content (e.g., images, videos, audio, text, code). | AI is used to help inventory managers and purchasing agents locate the right products from the Medical/Surgical Prime Vendor (MSPV) catalog that can be substituted for recent Government Purchase Card (GPC) purchases. It enables efficient management of expenses and ensures purchase correctness by surfacing untriaged items and comparing them to MSPV contracted items. | Reduced GPC spend, increased MSPV utilization, and overall increased compliance with Executive Order mandates to curb GPC spend. | AI outputs are text-based and intended to provide justification and assistance to inventory managers and purchasing agents making the decision about replacing GPC purchases with MSPV items. These outputs include: * Untriaged and triaged tabs with item details and match confidence levels. * In-depth views with GPC level purchase data and MSPV catalog matching. * Options to mark items for replacements or CPRC approval. * Tracking and action statuses such as archived, ready for procurement, or completed. | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-7221 | PII in Foreign Medical Program (FMP) Faxes | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning: Models trained on data to make predictions or classifications based on identified patterns or relationships. | Improve the speed and accuracy of identifying and processing claim forms submitted in multiple languages. | Improves processing speed to ensure the Veteran receives a completed claim in a more timely manner. AI is more cost-effective than Optical Character Recognition (OCR). | 1. Identify claims submitted in a foreign language. 2. Identify which foreign language is being used. 3. Identify the specific claims pages in a foreign language within the document. | | No |||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2459 | XtractOne | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-544 | Appointment Comments Categorization | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-118 | SafePointe WDS - OSSO | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3824 | electronic Virtual Assistant (e-VA) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1475 | Activity recognition using wearable sensors for use in closed loop deep brain stimulation systems | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1565 | AgileMD eCART Clinical Deterioration Model | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-216 | Smart AI Bot Assistant | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2336 | PsychCorpCenter | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2377 | Prostate Cancer, Genetic Risk, and Equitable Screening Study (ProGRESS) - Prostate Cancer Risk Prediction Model | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2500 | Podimetrics | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2869 | ICU CIS/ARK PDF Medical Entity and Clinical Scoring Extraction (PHI 3.5 LLM) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3033 | Multi-Modal Digital Image Exchange - AI (MDIE-AI) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3320 | Partially Observed Markov Decision Process for Post-Traumatic Stress Disorder | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3369 | ECG System Software | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3488 | Clinical Key (Elsevier) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact ||||||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3492 | ACS NSQIP Risk Score | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3496 | Avicenna LVO | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3500 | Ysio Max | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3623 | X100HT with Slide Loader with Full Field Peripheral Blood Smear (PBS) Application | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3664 | WRDensity by Whiterabbit.ai | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3746 | Withings Scan Monitor 2.0 | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3787 | WellDoc BlueStar | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3828 | WAVE Clinical Platform | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3869 | VX1, VX1+ | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3910 | VUNO Med-DeepBrain | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3992 | Volta AF-Xplorer | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4066 | Biotronik Home Monitoring System | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4435 | Post-Discharge 30-day Readmission or Death Prediction | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4964 | Intelligent 2D | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5013 | Medical Imaging Auto-segmentation | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5050 | VSTOne – AI-Driven Patient Monitoring and Care Delivery | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5173 | Implementation of a Severe COVID-19 Risk Prediction Tool | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5181 | CVI42 | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5333 | Aquilion ONE with AiCE | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-540 | ECG/EKG Machines- Interpretation of Results | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5620 | Pact Act Co-Pay Exemption Prediction Tool | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-573 | GE AMX Portable X-ray Machines | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5874 | Computer Vision Framework (CVF) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5919 | aPROMISE X | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-6030 | Dental - Dentsply Sirona CERE | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-659 | TeraRecon | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3004 | Investigative, Analytics, and Reporting Capabilities | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2795 | Machine Algorithm for Report Surveillance (MARS) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1159 | Identity Governance and Administration (IGA) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-171 | Google Cloud Platform - CCAI / Dialogflow | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-380 | VEO Virtual Analyst Proof of Concept | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-503 | ScienceLogic Artificial Intelligence Operations (AIOPS) Software Subscriptions with Maintenance and Professional Support Services. | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-52 | VA Chat Copilot Meta Pilot | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4316 | 2268 Next Generation (2268NextGen) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2266 | National Training Team / Schools NLP FAQ Dashboard | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3578 | Synthetic Data Creation | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3619 | Billie GPT | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4603 | VEO Insights Engine | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4644 | VX Insights Hub Agent | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-1561 | VCA & PPMS Chatbot | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2422 | AI for Classifying Safety Events | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2463 | AI Monitoring FEHR Harms | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2504 | Thematic Analysis Using BERT Modeling | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2508 | My HealtheVet VSignals Main Improvement Summary | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-2582 | AI Health Coach | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3119 | Lyssn for Mental Health | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3324 | Microsoft Power Automate and AI | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-3574 | BlackBox Code research | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4099 | Synthetic Data Generation: Experimenting with OpenSource, Third Party and GenAI | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4390 | Vocational Rehabilitation Services (VRS) Employment Support | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-4562 | Adobe Creative Cloud | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5009 | Actions for Application Vulnerabilities | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5214 | Discharge Predictive Model | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5218 | Optimizing Renin Angiotensin System Blocker Use for Kidney Disease | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-5825 | Rhythm Express | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-24-93 | CareCentra Next Level Personalized AI Health Coach | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Veterans Affairs | VA-25-856 | AI-Enhanced Call Center | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Election Assistance Commission | Clearinghouse Division | EAC-1 | AI Answers in Clearinghouse Network | Pilot | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Generative AI; Natural Language Processing (NLP) | AI Answers is a chatbot integrated into the Civic Roundtable platform that allows election officials to query AI about discussions and resources within the Clearinghouse Network. It provides a natural language method for navigating community content without searching the broader internet. | This tool improves access to EAC-provided resources, reduces barriers to finding relevant content, and enables election officials to quickly locate discussions, documents, and guidance within the Clearinghouse Network. | Natural-language answers, summaries of discussions, and links to on-platform documents/resources. | 08/01/2025 | Purchased from a vendor | Civic Roundtable | Natural-language answers, summaries of discussions, and links to on-platform documents/resources. | General descriptions only; AI Answers uses on-platform content provided by Civic Roundtable and participating users. It does not train on EAC data, external data, or internet sources. | Yes (incidental PII may be involved through authenticated platform users) | None of the above | No | Yes | No | Direct usability testing and vendor-led focus groups | |||||||||||
| Election Assistance Commission | All EAC divisions where staff elect to use approved tools | EAC-2 | Internal Generative AI Tools for Staff Productivity | Pilot | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI; Natural Language Processing (NLP) | These tools allow EAC staff to draft, edit, summarize, and rewrite documents; brainstorm ideas; and support plain-language improvements while adhering to the EAC’s AI Use Policy. The EAC uses widely available commercial AI capabilities embedded in productivity and communication tools, such as Microsoft 365 Copilot features for drafting and editing, AI-enabled meeting summarization and captioning, translation features, and document refinement. These tools provide general-purpose assistance to staff and do not make or influence decisions about individuals or rights. | Improved staff productivity, clearer written communication, faster drafting workflows, and enhanced research and summarization support. | Text summaries, rewritten content, draft emails, recommended wording, brainstorming output. | 09/01/2025 | Purchased from a vendor | OpenAI; Anthropic; Google; xAI; Perplexity; Microsoft | Yes | Text summaries, rewritten content, draft emails, recommended wording, brainstorming output. | General-purpose data entered by staff for productivity tasks; no PII or sensitive content is allowed per the EAC's AI Use Policy. | No | None of the above | No | No | No | General solicitations of feedback and comments from EAC staff | ||||||||||
| Election Assistance Commission | Grants Division | EAC-3 | Grants Lifecycle Application System | Deployed | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI; Natural Language Processing (NLP) | This tool is built into the Grants Division's grants management system, and it allows EAC staff to review high-level trends among grant applications. | Improved staff productivity, faster data analysis, and enhanced research and summarization support. | Generative text responses and data analytics. | 08/01/2025 | Purchased from a vendor | Groundswell | Generative text responses and data analytics. | General-purpose data entered by staff for productivity tasks; no PII or sensitive content is allowed per the EAC's AI Use Policy. | No | None of the above | No | General solicitations of feedback and comments from EAC staff |||||||||||||
| Election Assistance Commission | All EAC Divisions | EAC-4 | Routing emails from shared inboxes | Pilot | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI; Natural Language Processing (NLP) | The EAC uses a single inbox for the majority of its public communications. This tool is intended to route those incoming messages to the appropriate individuals or inboxes immediately to decrease response time and improve customer engagement. | Improved staff productivity, faster drafting workflows, and enhanced customer service. | Emails forwarded to the appropriate individual or inbox. | 01/01/2026 | Purchased from a vendor | OpenAI; Anthropic; Google; xAI; Perplexity; Microsoft | Emails forwarded to the appropriate individual or inbox. | General-purpose data entered by staff for productivity tasks; no PII or sensitive content is allowed per the EAC's AI Use Policy. | No | None of the above | No | General solicitations of feedback and comments from EAC staff |||||||||||||
| Federal Deposit Insurance Corporation | Chief Information Officer Organization | FDIC 3 | Plain Language Policy Assistant | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Chief Information Officer Organization | FDIC 4 | Knowledge Article Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Increase in self-service available answers and a reduction in issue resolution times through improved quantity and quality of information available to provide answers to previously solved problems. | Decreased effort to produce and validate knowledge articles, which will result in an increase in self-service available answers and a reduction in issue resolution times through improved quantity and quality of information available to provide answers to previously solved problems. | A document containing steps and conclusions to IT issues published via a web-based native format. | A document containing steps and conclusions to IT issues published via a web-based native format. |||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Corporate University | FDIC - 9 | Immersive Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This training tool will allow FDIC users to practice what they have been trained on. This tool provides an interactive learning method for participants to engage in low-stakes practice. Users can apply different outcomes to determine best methods. | Immersive learning offers an engaging approach to training programs by creating realistic environments using GenAI where learners can practice and refine their skills. It provides a controlled environment wherein learners interact with immersive technology to solve pre-determined complex situations and problems. This approach amplifies employee engagement and retention, performance management, learning and development, administrative tasks, workforce planning, employee onboarding, recruiting, and interviewing, as well as others. | Predictions based on sequencing provided by the developer, and completion reports with objectives met. | Predictions based on sequencing provided by the developer, and completion reports with objectives met. |||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Corporate University | FDIC 11 | Student Monitoring | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Corporate University | FDIC 12 | Plagiarism Detection | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Corporate University | FDIC 13 | Student Outcome Projection | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Corporate University | FDIC 14 | Facilitating Status | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Depositor and Consumer Protection | FDIC 15 | AI Assisted Data Collection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Manually transcribing structured data from bank-provided PDF documents of consumer loan applications takes FDIC personnel a significant amount of time, leading to delays, high error rates, and increased costs. The AI solution is intended to automate data extraction, reducing transcription errors and enabling faster, more accurate collection of loan application data. | The FDIC will more effectively and efficiently utilize its examination resources by reducing manual components of data collection, reducing data transcription error rates, and increasing speed of data collection. | The output is a data file of extracted data (csv, text file, json file) from consumer credit reports required to conduct examination analytics for compliance and fair lending reviews. | b) Developed in-house | Yes | The output is a data file of extracted data (csv, text file, json file) from consumer credit reports required to conduct examination analytics for compliance and fair lending reviews. | Consumer loan applications and credit reports from banks for supervised learning and creation of bank-specific data collection models | Yes | https://www.fdic.gov/policies/privacy/documents/pia-fdic-ets.pdf | k) None of the above | Yes | https://www.fdic.gov/policies/privacy/documents/pia-fdic-ets.pdf |||||||||||||
| Federal Deposit Insurance Corporation | Division of Depositor and Consumer Protection | FDIC 16 | AI Assisted Time Management | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Depositor and Consumer Protection | FDIC 17 | Compliance Risk Monitoring | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Examiners must analyze a large amount of data to identify regulatory areas with relatively higher risk of violations on a data-driven, objective basis. This process is time-consuming and does not currently utilize ML, which can reduce time intensity and improve effectiveness of risk detection. | FDIC will more effectively and efficiently utilize its examination resources on financial institutions and regulatory areas where consumer harm risk is highest. | A report that identifies relative risk of compliance violations. | A report that identifies relative risk of compliance violations. |||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Depositor and Consumer Protection | FDIC 19 | Pre-Examination Planning Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Computer Vision | The pre-exam planning process is time-intensive, and the FDIC has limited examiner resources to review pre-exam planning materials and make examination scoping decisions. | This project mitigates time constraints and limited examiner resources by enhancing the efficiency of pre-exam planning: it identifies PEP questions that are not indicative of, or are misaligned with, risk, thereby reducing examiner burden and yielding time savings for pre-exam planning. | A report related to pre-exam questions on their alignment with risk identification. | b) Developed in-house | Yes | A report related to pre-exam questions on their alignment with risk identification. | PEP IR (Pre-Examination Planning Instrument Report) uses pre-exam planning documents, such as ARCH (Assessment of Risk of Consumer Harm) and Call Report data, to identify pre-exam planning questions for further review. | No | k) None of the above | Yes |||||||||||||||
| Federal Deposit Insurance Corporation | Division of Depositor and Consumer Protection | FDIC 21 | HMDA (Home Mortgage Disclosure Act) Outlier Screen | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Scoping fair lending examinations can be time-intensive. | The HMDA (Home Mortgage Disclosure Act) program pre-screens off-site mortgage data to allow better utilization of examiner time on the highest-risk areas and provides a data-driven, objective basis for risk identification. | DCP Exams receives a report identifying areas of heightened risk of consumer harm. | b) Developed in-house | Yes | DCP Exams receives a report identifying areas of heightened risk of consumer harm. | Regulatory Home Mortgage Disclosure Act (non-public) data is analyzed with supervised learning using SAS 9.4. | Yes | https://www.fdic.gov/policies/privacy/documents/fdic-framework-for-oversight-of-compliance-and-cra-activities-user-suite-pia.pdf | Race/Ethnicity, Sex/Gender, Age | Yes | https://www.fdic.gov/policies/privacy/documents/fdic-framework-for-oversight-of-compliance-and-cra-activities-user-suite-pia.pdf |||||||||||||
| Federal Deposit Insurance Corporation | Division of Depositor and Consumer Protection | FDIC-22 | FDIC Deposit Insurance Misrepresentation | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Other | AI can help the FDIC identify instances of misrepresentation of FDIC Deposit Insurance. | Natural language processing can search media reports and postings to identify posts with heightened risk of misrepresentation for additional review. | A database of media reports and postings with high risk of misrepresentation to be further reviewed manually. | a) Purchased from a vendor | Meltwater | No | A database of media reports and postings with high risk of misrepresentation to be further reviewed manually. | Yes | https://www.fdic.gov/policies/privacy/documents/fdic-social-media-pia.pdf | k) None of the above | Yes | https://www.fdic.gov/policies/privacy/documents/fdic-social-media-pia.pdf |||||||||||||
| Federal Deposit Insurance Corporation | Division of Insurance and Research | FDIC-24 | Financial Well-being Project | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Administration | FDIC - 25 | Generative Artificial Intelligence (AI) for Legal Research | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Reduce the time it takes to find legal research information, so approved FDIC employees can answer legally sensitive questions faster and more easily. | Thomson Reuters plans to deliver generative AI capabilities in Westlaw. This use case will allow users to find the answers they need faster and more easily. This directly supports our mission by providing users with enhanced research capabilities through generative AI. | Commercially available legal research tools (Westlaw, Lexis Plus, and Bloomberg Law) will provide users with generative AI capabilities (best case law, review of a legal brief, etc.) to give attorneys information faster. | Commercially available legal research tools (Westlaw, Lexis Plus, and Bloomberg Law) will provide users with generative AI capabilities (best case law, review of a legal brief, etc.) to give attorneys information faster. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Administration | FDIC 32 | Automatically-Scored Writing Assessment (AWA) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Reduce the time it takes to review the writing assessment and the cost of essay scoring for the purpose of making more timely employment decisions. | The Automatically-Scored Writing Assessment (AWA) is intended to significantly increase the speed and reduce the cost of essay scoring for the purpose of employment decisions. | A single overall score for the essay on a scale from 1 to 5, as well as three subscale scores ranging from 1 to 5 across the following dimensions: grammar and mechanics, analysis and reasoning, and organization and structure. | c) Developed with both contracting and in-house resources | Personnel Decisions Research Institutes, LLC (PDRI) | No | A single overall score for the essay on a scale from 1 to 5, as well as three subscale scores ranging from 1 to 5 across the following dimensions: grammar and mechanics, analysis and reasoning, and organization and structure. | Not Applicable | Documentation has been partially completed: some documentation exists (detailing the composition and any statistical bias or measurement skew for training and evaluation purposes), but documentation took place within this use case's development. | Yes | Not Applicable | k) None of the above | No | Not Applicable | Not Applicable | ||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 34 | FDIC Predictive Budget Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Take in budget data for reporting and prediction. Analyze and explain differences in budget and actuals data. The AI solution is intended to decrease time to produce results via automation and reduce risk of human error. It helps solve the problem of timeliness of predictive efforts. | Reduces time and work to produce the report; increases response time to react and review data; allows deeper dives. This helps protect the DIF. | The output would be a written explanation and visual of existing gaps between budget and actuals spent, in addition to visualizations of future looking analytic trends for budget considerations. The outputs would be viewed by users on a dashboard. | The output would be a written explanation and visual of existing gaps between budget and actuals spent, in addition to visualizations of future looking analytic trends for budget considerations. The outputs would be viewed by users on a dashboard. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 36 | Goals and Workload Assumptions | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Summarize divisional efforts into a narrative taking into consideration staff numbers and competencies. The AI solution is intended to increase employee awareness of efforts and provide focus for what specialty a given employee may be interested in. This helps resolve and prevent internal communication problems. | Reduced time to receive answers based on data. This internal solution improves operational efficiency of the agency. | A visualization of effectiveness of divisional projects and a recommendation on how to fill the gap on missing elements or competencies. This would display via various charts on reports plus a narrative for consideration. | A visualization of effectiveness of divisional projects and a recommendation on how to fill the gap on missing elements or competencies. This would display via various charts on reports plus a narrative for consideration. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 37 | CFOO Admin Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Compiling data from the internal FDIC website and targeted documents to answer questions about CFOO admin. The AI solution is intended to increase user access to already available data in a more digestible form. This helps internal employees self-solve problems quickly when engaging with administrative issues. | Makes information more readily available, searchable, and readable. This internal solution improves operational efficiency of the agency. | An answer to the question the user asks the bot. This addresses topics on internal division administrative questions including office maintenance, training, HR questions, policy, and security. | b) Developed in-house | Yes | An answer to the question the user asks the bot. This addresses topics on internal division administrative questions including office maintenance, training, HR questions, policy, and security. | The CFOO Admin Chatbot uses the FDIC CFOO Administrative data set for training. The data contains administrative data elements. The administrative data set is internally available to all FDIC employees and contractors. | No | k) None of the above | Yes | https://github.com/RasaHQ/rasa | ||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 38 | CFOO Travel Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Compiling data from the internal FDIC website and targeted documents to answer questions about FDIC travel. The AI solution is intended to increase user access to already available data in a more digestible form. This helps internal employees self-solve problems quickly when engaging with travel issues. | Makes information more readily available, searchable, and readable. This internal solution improves operational efficiency of the agency. | An answer to the question the user asks the bot. This addresses internal agency questions on travel including directives, policy, training, limits, and alerts. | b) Developed in-house | Yes | An answer to the question the user asks the bot. This addresses internal agency questions on travel including directives, policy, training, limits, and alerts. | The CFOO Travel Chatbot uses the FDIC Travel Policy data set for training. The data contains travel policy data elements. The travel policy data set is internally available to all FDIC employees and contractors. | No | k) None of the above | Yes | https://github.com/RasaHQ/rasa | ||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 39 | CFOO Transactional Data Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI assistance in monitoring financials/invoices for proper submittal, duplicate payments, and other abnormalities. The AI solution is intended to be an automated assist for auditing and monitoring efforts. This helps ensure compliance and reduces corporate risk. | Reduced time to catch errors, perform reviews, make data visualizations available. This helps protect the agency from the financial errors listed. | A recommendation on when a gap is identified, why it was identified, and possible causes. This would be displayed via viz tables showing the flagged invoices or abnormalities coupled with charts showing the general status. This would include drilldowns to contract numbers and agency sections to identify areas of concern. | A recommendation on when a gap is identified, why it was identified, and possible causes. This would be displayed via viz tables showing the flagged invoices or abnormalities coupled with charts showing the general status. This would include drilldowns to contract numbers and agency sections to identify areas of concern. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 40 | FDIC Staffing Model | a) Pre-deployment The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Monitor retention rate, retirement planning, and knowledge retention and succession. The AI solution is intended to be an automated assist for auditing and monitoring efforts. This helps reduce corporate risk and works toward solving problems of succession management. | Reduced time to receive answers, allowing folks to focus on more critical mission-driven work. | A prediction on future staffing and recommendations on how to meet varied goal points. This would display charts of before and after based on dates and narratives to be included in reports for consideration. This would include drilldowns to agency sections to identify areas of concern. | A prediction on future staffing and recommendations on how to meet varied goal points. This would display charts of before and after based on dates and narratives to be included in reports for consideration. This would include drilldowns to agency sections to identify areas of concern. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 41 | FDIC Financial Audit Report | a) Pre-deployment The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Monitor employee transactions to ensure regulations and policies are being followed. The AI solution is intended to be an automated assist for auditing and monitoring efforts. This helps ensure compliance and reduces corporate risk. | Reduced time to receive answers, allowing folks to focus on more critical mission-driven work. This helps the agency meet mandated reporting requirements. | An identification of items that fall outside policy and a recommendation on whether a trend needs to be addressed. This would be displayed via viz tables showing the flagged transactions coupled with charts showing the general status. This would include drilldowns to directive numbers and agency sections to identify areas of concern. | An identification of items that fall outside policy and a recommendation on whether a trend needs to be addressed. This would be displayed via viz tables showing the flagged transactions coupled with charts showing the general status. This would include drilldowns to directive numbers and agency sections to identify areas of concern. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 42 | Risk Identification and Response | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Apply a risk model to identified risks and incorporate additional data sets to identify any correlations or possible solutions. The AI solution is intended to reduce time and pain points of processing or reading through large amounts of disparate datasets or narratives. | Reduce risk to corporation by being able to more quickly and accurately analyze and report out on data. | A recommendation of any correlations among risk data and a report including possible solutions. This would be displayed via viz tables showing the high risks or abnormalities coupled with charts showing the general status. This would include drilldowns to risk profile categories and agency sections to identify areas of concern. | A recommendation of any correlations among risk data and a report including possible solutions. This would be displayed via viz tables showing the high risks or abnormalities coupled with charts showing the general status. This would include drilldowns to risk profile categories and agency sections to identify areas of concern. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Finance | FDIC 44 | FDIC Investment Portfolio Management | a) Pre-deployment The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Analyze investment history and provide an additional predictive analysis for investments. The AI solution is intended to decrease time to produce results via automation to reduce risk of human error. | Reduced time to receive answers, allowing folks to focus on more critical mission-driven work, act as an additional investment analysis review for staff to compare with. This helps protect the DIF. | A prediction of the outcome of possible investment choices and a recommendation of which would do best for the current portfolio. The output for this would be similar to a portfolio dashboard with a focus on internally flagged watch items. | A prediction of the outcome of possible investment choices and a recommendation of which would do best for the current portfolio. The output for this would be similar to a portfolio dashboard with a focus on internally flagged watch items. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 45 | Analytics on Loan Data | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 46 | Analytics on Securities Data | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 47 | Business Data Extraction from Loan Files | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 48 | Data Extraction from Security Prospectus | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 50 | Initial Deposit File Analysis | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 51 | Owned Real Estate (ORE) Property Appraisal | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 52 | Trust Document Entity Extraction | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Currently, claims specialists spend countless hours manually reviewing trust agreements to extract grantor/settlor and beneficiary information. Implementing OCR and Llama to automate key data extraction from these agreements will make the process significantly faster, more efficient, and more accurate while eliminating the potential for manual errors. | This will reduce the time and cost to transpose information from Trust Agreements. | The extracted data, which will include key data elements such as grantor/settlor and beneficiaries from Trust Agreements, along with the original Trust Agreement, will be presented to users through an application user interface. | c) Developed with both contracting and in-house resources | Deloitte | Yes | The extracted data, which will include key data elements such as grantor/settlor and beneficiaries from Trust Agreements, along with the original Trust Agreement, will be presented to users through an application user interface. | In-house Trust Documents | Yes | https://www.fdic.gov/policies/privacy/documents/fdic-insurance-determinations-and-payouts-pia.pdf | k) None of the above | Yes | https://www.fdic.gov/policies/privacy/documents/fdic-insurance-determinations-and-payouts-pia.pdf | ||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC 53 | Trust Document Entity Extraction - LLM | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The current trust document review process is labor-intensive and may require further reviews by legal experts, prolonging the processing time. Leveraging a Large Language Model to automate the extraction of key information from trust agreements to match and verify customer form input will lead to a faster, more accurate, and more efficient workflow for claims specialists. | The current trust document review process is labor-intensive and may require further reviews by legal experts, prolonging the processing time. Using an LLM to automate the extraction of data will reduce manual effort and improve the quality of data extracted from trust documents. | By using the LLM, the extracted data will include key data elements, such as grantor/settlor and beneficiaries from Trust Agreements, along with the original Trust Agreement, which will be presented to users through an application user interface. This will automate the extraction of key information from trust agreements to match and verify customer form inputs. | By using the LLM, the extracted data will include key data elements, such as grantor/settlor and beneficiaries from Trust Agreements, along with the original Trust Agreement, which will be presented to users through an application user interface. This will automate the extraction of key information from trust agreements to match and verify customer form inputs. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Risk Management Supervision | FDIC 55 | Extracting IT Information | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Searching for information using AI rather than manual reviews of PDF documents or using the RADD search function. AI simplifies the search effort with a more robust search engine and provides better potential hits of desired information much faster than the current methods. | Algorithm, Language Processing Horizontal Analysis - Risk Examination (AlphaREx) analyzes unstructured narrative content in Information Technology (IT) examination workpapers and Reports of Examination (ROE) to provide meaningful, actionable insights into information. AlphaREx capabilities contribute to the FDIC's Modernization initiative for the Supervision Modernization Business Drivers and support the mission of the RMS Information Technology Section. Additionally, AlphaREx was developed to enable the automation, ingestion, analysis, and visualization of the information technology and operational unstructured and structured data gathered for each supervised financial institution. | Information collected across the IT Profile, FDIC Reports of Examination (ROEs), and multiple other sources (both unstructured/structured data) is a critical business need and is used by FDIC to promptly identify and address IT risks and cyber risks for the FDIC Insured Depository Institutions (IDIs). In addition, this information is used to effectively identify trends, patterns, and areas of improvement for the InTREx program, including training and changes to the FDIC IT Examination processes. | b) Developed in-house | Yes | Information collected across the IT Profile, FDIC Reports of Examination (ROEs), and multiple other sources (both unstructured/structured data) is a critical business need and is used by FDIC to promptly identify and address IT risks and cyber risks for the FDIC Insured Depository Institutions (IDIs). In addition, this information is used to effectively identify trends, patterns, and areas of improvement for the InTREx program, including training and changes to the FDIC IT Examination processes. | Examination documents | Yes | https://www.fdic.gov/policies/privacy/documents/pia-fdic-ets.pdf | k) None of the above | Yes | Various code libraries, including the following: beautifulsoup4 chardet cx_Oracle dask datefinder==0.7.1 docx2txt flask gensim==4.0.0 GitPython lxml matplotlib >= 3.7 nltk==3.6.7 numpy >= 1.24 pandas= 11.0 pyodbc python-dateutil python3-saml rapidfuzz scikit-learn= 1.10 sentence-transformers >= 2.2 # snappy==1.1.10 # Spacy 3.0 is incompatible with AlphaREx spacy_setup.py. nlp.add_pipe now takes the string name of the registered component factory, not a callable component. # Spacy 2.3.7 runs with python 3.9 without crashing, but yields different pipeline results. spacy==2.2.4 statsmodels tabulate torch # Additional Packages used during software development autoflake black==22.3.0 build bump2version coverage flake8 isort==5.11.5 jupyter locust >= 2.19 mypy pip pre-commit pylint pytest pytest-cov pyupgrade ruff # Packages installed with pip when using conda detect-secrets docx2python flask-paginate gunicorn line_profiler qdrant-client sphinx sphinx-rtd-theme transformers>=4.3 | https://www.fdic.gov/policies/privacy/documents/pia-fdic-ets.pdf | ||||||||||||
| Federal Deposit Insurance Corporation | Division of Risk Management Supervision | FDIC 56 | Transcribing Structured Interviews | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Risk Management Supervision | FDIC 57 | New Offsite Models | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This machine learning (ML) model is intended to identify banks most likely to be downgraded at or before the next examination. | An improved methodology of identifying potential downgrades for use in our quarterly offsite review program. This provides information on potentially deteriorating financial conditions in the better rated banks across the country. | Identify banks more likely to be downgraded. | b) Developed in-house | Yes | Identify banks more likely to be downgraded. | Call Report data, bank ratings, and macroeconomic data | No | k) None of the above | Yes | |||||||||||||||
| Federal Deposit Insurance Corporation | Chief Information Officer Organization | FDIC-58 | Enterprise FDIC Chat | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Human capital resources are used on reading and summarizing content, research, generating draft documents, generating briefings, and searching for agency information. | Improvements in speed to skillset and business productivity for employees by introducing a secure, agency compliant Chat that can provide AI-assisted Q&A, content summarization, and basic semantic search. | Generate first drafts of documents, briefings, or communication materials. Create visual representation of data sets for reports or presentations. Search for agency information using knowledge retrieval system. | Generate first drafts of documents, briefings, or communication materials. Create visual representation of data sets for reports or presentations. Search for agency information using knowledge retrieval system. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Administration and Office of Inspector General | FDIC-60 | Criminal Background Investigation | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | By automating the verification process for conducting background screenings, the tool helps FDIC make faster and more informed hiring and investigatory decisions. | Improves the time to complete background screening. Reduces inaccurate and/or incomplete information on individuals. Reduces the manual verification process, which can be cumbersome and place a heavy administrative burden on HR staff. Protects FDIC-insured institutions from high-risk individuals. | Aggregation of public records used to evaluate risk. | 03/01/2026 | a) Purchased from a vendor | CLEAR | No | Aggregation of public records used to evaluate risk. | Not Applicable | Not Applicable | Yes | Not Applicable | l) Other | No | Not Applicable | Not Applicable | |||||||||
| Federal Deposit Insurance Corporation | Division of Administration | FDIC-61 | Financial Background Investigation | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | By automating the verification process for conducting background screenings, the tool helps FDIC make faster and more informed hiring and investigatory decisions. | Improves the time to complete background screening. Reduces inaccurate and/or incomplete information on individuals. Reduces the manual verification process, which can be cumbersome and place a heavy administrative burden on HR staff. | System outputs only include credit information such as whether payments are made on time, any delinquencies, and/or liens or bankruptcies, etc. An applicant's FICO score is not part of the Financial Background Investigation process and is not factored into the suitability, fitness, or eligibility determination. | a) Purchased from a vendor | Equifax, Trans Union | No | System outputs only include credit information such as whether payments are made on time, any delinquencies, and/or liens or bankruptcies, etc. An applicant's FICO score is not part of the Financial Background Investigation process and is not factored into the suitability, fitness, or eligibility determination. | Not Applicable | Not Applicable | Yes | Not Applicable | Unknown | No | Not Applicable | Not Applicable | ||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC-62 | Restitution Order AI Assistance | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The current process for managing restitution order data lacks efficiency, making it difficult to effectively identify, track, classify, and resolve these orders. This hinders the FDIC's ability to maximize recoveries from failed financial institutions. Implementing a strategic solution to enhance the identification, tracking, classification, and resolution of restitution order data will improve efficiency and ultimately support the FDIC's mission to maximize recoveries from failed financial institutions. | This is a strategic solution to enhance the identification, tracking, classification, and resolution of restitution order workload, ultimately supporting the FDIC's mission to maximize recoveries from failed financial institutions. The outputs will include a restitution order dataset, case resolution report, consolidated recovery statement, loss estimation, and asset tracking metrics. | Restitution order data in structured format. | Restitution order data in structured format. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC-63 | Provisional Holds Model | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This solution will enable the FDIC to proactively manage financial risk by providing the ability to estimate potential overpayments, even without real-time account-level data. The strategic insights gained will allow for the optimization of financial holds, minimizing both overpayment risk and operational inefficiencies. | The model supports the provisional holds process by estimating potential overpayment. Appropriate provisional holds allow depositors access to their funds in a timely manner, while controlling risks to the DIF from overpaying uninsured depositors during a bank failure. | This solution will enable FDIC to proactively manage financial risk by estimating potential overpayments under different levels of provisional holds in the absence of current account-level data and provide strategic insights needed to optimize our financial holds. | This solution will enable FDIC to proactively manage financial risk by estimating potential overpayments under different levels of provisional holds in the absence of current account-level data and provide strategic insights needed to optimize our financial holds. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Division of Resolutions and Receiverships | FDIC-64 | Background Check Web Application | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The FDIC can maintain contact details for individuals or entities only if they are self-reported. The CLEAR service provides potentially the most current contact and entity information for individuals or companies who may be holding monetary assets or bonds in failed institutions and may need to be reimbursed. | Expected benefits are the most current contact information for individuals/companies, including any related subsidiary entities. | Web-based output with contact details for individuals or entities. | a) Purchased from a vendor | CLEAR | Yes | Web-based output with contact details for individuals or entities. | FDIC does not train the model. | No | Yes | No | k) None of the above | No | Not open source; CLEAR is a proprietary, licensed system. | No | ||||||||||
| Federal Deposit Insurance Corporation | Legal Division and Office of Inspector General | FDIC-65 | Blockchain and Cryptocurrency Transaction Identification and Visualization | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The tool will improve the ability to identify and track the flow of transactions executed on the blockchain. | Making blockchain-based transactions more transparent and creating visualizations to facilitate understanding transaction flow, which supports identification of criminal activity by following the money through mixers, swaps, and smart contracts. | The outputs demonstrate the transactions across multiple blockchains. | 17/01/2026 | a) Purchased from a vendor | Chainalysis | No | The outputs demonstrate the transactions across multiple blockchains. | FDIC does not train the model. | No | i) Income | No | Not open source. | ||||||||||||
| Federal Deposit Insurance Corporation | Legal Division | FDIC-66 | Location and Evaluation of Individuals and Assets for Collectability of Civil Claim | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Obtaining the information on individuals' locations needed to pursue civil claims is a cumbersome process, requiring many manual hours of searching resources available physically and electronically. | Access to a centralized online investigative database that aggregates public records expedites an evaluation of the personal assets of an individual and an assessment of the likely collectability of damages under a potential civil claim, or of an aged, unpaid civil settlement or federal restitution order. | The AI system's outputs provide information that can support a user ultimately recommending whether to pursue potential civil claims. | a) Purchased from a vendor | CLEAR | No | The AI system's outputs provide information that can support a user ultimately recommending whether to pursue potential civil claims. | FDIC does not train the model. | Yes | l) Other | No | Not open source. | |||||||||||||
| Federal Deposit Insurance Corporation | Division of Risk Management Supervision | FDIC-67 | Digital Asset Liquidity Risk Analysis | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Other | Chainalysis traces digital asset funds and provides identities/wallets/flows of digital assets among wallets on public blockchains. | We use Chainalysis for liquidity flow analysis (flow of digital asset funds) to inform policymaking efforts related to stablecoins and other digital assets, operational risk analysis, and analysis of concentration of digital asset activities and entities. | Identities/wallets/flows of digital assets among wallets on public blockchains. | a) Purchased from a vendor | Chainalysis | No | Identities/wallets/flows of digital assets among wallets on public blockchains. | Unknown | Not Applicable | No | Not Applicable | None of the above | No | Not Applicable | |||||||||||
| Federal Deposit Insurance Corporation | Office of Communications, Division of Complex Institution Supervision & Resolution, Legal Division, Division of Risk Management Supervision | FDIC-68 | Social Listening | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | There are many social media sites that provide information useful to the FDIC's fulfillment of its mission; searching each independently is time-consuming and inefficient. Summarized results for social and news items on the FDIC, liquidity risks, the banking industry, and other relevant topics help users support the mission. | Access to a centralized online tool that searches numerous social media sites expedites analysis of publicly available information. Utilized for social listening about policy issues, potential liquidity risks, banks, and more to help support the mission. | Outputs help to inform as to issues that may impact the FDIC's fulfillment of its mission. Outputs reflect the search results of prompts related to searches focused on mentions of the FDIC and trends affecting the ability of the FDIC to fulfill its mission. | a) Purchased from a vendor | Meltwater | No | Outputs help to inform as to issues that may impact the FDIC's fulfillment of its mission. Outputs reflect the search results of prompts related to searches focused on mentions of the FDIC and trends affecting the ability of the FDIC to fulfill its mission. | Public data, trained by vendor | Yes | https://www.fdic.gov/policies/privacy/documents/fdic-social-media-pia.pdf | k) None of the above | No | https://www.fdic.gov/policies/privacy/documents/fdic-social-media-pia.pdf | ||||||||||||
| Federal Deposit Insurance Corporation | Division of Risk Management Supervision | FDIC-71 | Background Check Tool for Fraud Investigations | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Other | CLEAR is used to assist with fraud investigations. It searches and retrieves publicly available data on individuals and businesses. | Rapidly obtaining information on individuals and businesses from publicly available data sources. | Background information on individuals and ownership information on businesses. | a) Purchased from a vendor | CLEAR | No | Background information on individuals and ownership information on businesses. | Unknown | Not Applicable | Yes | Not Applicable | None of the above | No | Not Applicable | |||||||||||
| Federal Deposit Insurance Corporation | Division of Administration | FDIC-72 | Data Extraction from Contract PDF Files | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Reduce the time to extract specific information from electronic forms, saving time and reducing errors from this manual process. | The AI model will extract specific information (dates) from an electronic form. This will turn a manual process into an automated one, saving worker time and increasing accuracy. | The output will be a spreadsheet that flags any contract actions that do not meet FDIC procurement standards, specifically where the contracting officer signs after the effective date. | The output will be a spreadsheet that flags any contract actions that do not meet FDIC procurement standards, specifically where the contracting officer signs after the effective date. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Office of Legislative Affairs | FDIC-83 | Corporate Impact Analysis Object | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The Office of Legislative Affairs is the Corporation's congressional liaison, closely monitoring and responding to legislation important to the FDIC. In doing so, the Office of Legislative Affairs is responsible for identifying legislation in a timely manner and working with the Legal Division to complete a legislative analysis summarizing the impact of the legislation relative to the FDIC, insured depository institutions, and consumers (at a minimum). The Corporate Impact Analysis Object use case will utilize previously generated legislative analyses, covering both legislation introduced in the last six years and full public law analyses since 1993, to source draft impact statements of bills. The quicker turnaround of draft analyses will address the significant reduction in attorneys available to draft this work product. | Currently, the Office of Legislative Affairs (OLA) spends time reviewing legislative analyses generated by the Legal Division. The AI solution is intended to enhance this process by decreasing the time necessary to compile an initial review and allowing more focus to be placed on the fundamental impacts, thereby leading to productivity gains. | The system will generate a Highlight Overview of the legislation, including the name of the sponsor and the bill title; an overall Summary of the bill; a section on the potential impact to the FDIC; a section on the potential impact to financial institutions; and a section on the potential impact to consumers. Outputs will be used to brief OLA, the Legal Division, and the Office of the Chairman (as well as other D/Os) on the impact the legislation will have on the Corporation, insured depository institutions, consumers, etc. | The system will generate a Highlight Overview of the legislation, including the name of the sponsor and the bill title; an overall Summary of the bill; a section on the potential impact to the FDIC; a section on the potential impact to financial institutions; and a section on the potential impact to consumers. Outputs will be used to brief OLA, the Legal Division, and the Office of the Chairman (as well as other D/Os) on the impact the legislation will have on the Corporation, insured depository institutions, consumers, etc. | |||||||||||||||||||||
| Federal Deposit Insurance Corporation | Office of Inspector General | FDIC-89 | OIG computer forensics investigation tools | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Access and store information from electronic devices in support of criminal investigations. | Assists in law enforcement investigations. | Electronic files. | 03/01/2026 | a) Purchased from a vendor | Adobe, Auto Split, Microsoft, Scan Writer, Social Discovery | No | Electronic files. | Not trained by FDIC. | Yes | TBD | c) Age | No | No | TBD | ||||||||||
| Federal Energy Regulatory Commission | FERC | FERC-0001 | Summarization & Policy Analysis for Regulatory Comments (originally Leverage AI in the Rulemaking Process Use Case) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | SPARC does not make decisions or take actions that directly affect individual rights, benefits, or services. It supports internal analysis of public comments and does not operate autonomously or outside human oversight. Its outputs are used by analysts to inform summaries and sentiment trends, not to drive determinations or enforcement. | Natural Language Processing (NLP) | Addresses duration, efficiencies, and accuracies in initial comment analysis. | Reduces the time required for comment analysis as part of mission processes. | 1. Groups public comments by topic and identifies the overall sentiment: positive, negative, or neutral toward each issue. 2. Provides interactive summaries and visualizations that help analysts quickly understand what commenters are saying and how many are engaged on each topic. 3. Enables analysts to ask questions about the comments and receive AI-generated answers, making it easier to explore large volumes of feedback efficiently. | 15/10/2025 | c) Developed with both contracting and in-house resources | Zvolvant (Small Business) | Yes | 1. Groups public comments by topic and identifies the overall sentiment: positive, negative, or neutral toward each issue. 2. Provides interactive summaries and visualizations that help analysts quickly understand what commenters are saying and how many are engaged on each topic. 3. Enables analysts to ask questions about the comments and receive AI-generated answers, making it easier to explore large volumes of feedback efficiently. | SPARC uses publicly submitted comments to train and fine-tune its models, and evaluates performance based on how accurately the system summarizes topics, detects sentiment, and responds to analyst queries, as well as on analyst acceptance. | No | k) None of the above | Yes | a) Yes | Impacts include improved efficiency in comment analysis, reduced manual workload for analysts, and enhanced transparency. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | a) Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | a) Direct usability testing | ||||
| Federal Energy Regulatory Commission | FERC | FERC-0002 | Improve Safety Inspections Use Case | a) Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | This use case is in pre-deployment and initial analysis determines that it will not make decisions or take actions that directly affect individual rights, benefits, or services. Reassessment will be conducted upon development. | Other | The AI solution is intended to enhance the efficiency and accuracy of dam and LNG pipeline structure inspections by FERC's dam inspectors. It will analyze historical and real-time images, videos, and notes to identify potential safety defects and trends. This approach reduces the manual workload of safety inspectors and improves inspection timelines. | c) Developed with both contracting and in-house resources | Zvolvant (Small Business) | No | Images obtained by the AI will build a repository for training. | k) None of the above | Yes | Improved inspection accuracy and reduced manual review time were identified through pilot testing and analyst feedback. | c) Yes – by the CAIO | b) Not applicable | ||||||||||||||
| Federal Energy Regulatory Commission | FERC | FERC-0003 | Enhance Market Surveillance and Fraud Detection Use Case | a) Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | This use case is in pre-deployment and initial analysis determines that it will not make decisions or take actions that directly affect individual rights, benefits, or services. Reassessment will be conducted upon development. | Other | The AI solution aims to assist staff in analyzing large and complex market datasets for market integrity and compliance purposes. It will process and interpret vast amounts of trading data at high speed, identifying anomalies, trends, and potential regulatory violations. This capability will help maintain fair and transparent markets, enabling staff to focus on strategic decision-making. The AI’s efficiency and precision will save time, improve compliance, and uphold public trust in market operations. | c) Developed with both contracting and in-house resources | Zvolvant (Small Business) | No | We have not started work on this use case and have not identified specific data sets needed to support all market surveillance activities. | k) None of the above | Yes | c) Yes – by the CAIO | b) Not applicable | |||||||||||||||
| Federal Energy Regulatory Commission | FERC | FERC-0004 | Support Interconnection Request Responses Use Case | a) Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | This use case is in pre-deployment and initial analysis determines that it will not make decisions or take actions that directly affect individual rights, benefits, or services. Reassessment will be conducted upon development. | Other | The AI solution will be designed to address the growing backlog of project interconnection requests and reduce the average three-year wait time. By accelerating the processing of proposals, the AI will help FERC achieve cost savings and shorten request timelines. | c) Developed with both contracting and in-house resources | Zvolvant (Small Business) | No | We have not started work on this use case and have not identified specific data sets needed to support all interconnection requests. | k) None of the above | Yes | c) Yes – by the CAIO | b) Not applicable | |||||||||||||||
| Federal Energy Regulatory Commission | FERC | FERC-0005 | Gas Blanket Certificates (Permitting) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | The Blanket Certificate use case automates administrative tracking and reporting functions without making determinations that affect individual rights, benefits, or services. The system operates under human oversight and does not independently initiate regulatory actions or enforcement. Its outputs are used to support internal reporting and compliance reviews, not to drive decisions with legal or civil implications. | Generative AI | It is designed to streamline the classification, tracking, and summarization of filings and issuances related to blanket certificates, reducing manual workload and improving consistency across document types. | Improves operational efficiency by automating repetitive tasks, enhances transparency in regulatory processes, and accelerates access to structured data for both internal analysts and stakeholders. | The system generates categorized summaries of filings and issuances, flags sensitive content, and produces searchable metadata tags that support downstream reporting and compliance reviews. | 24/11/2025 | c) Developed with both contracting and in-house resources | Zvolvant (Small Business) | No | The system generates categorized summaries of filings and issuances, flags sensitive content, and produces searchable metadata tags that support downstream reporting and compliance reviews. | The system uses historical filings and issuances from FERC’s eLibrary and PIW systems, including scanned PDFs and structured metadata, to train and evaluate classification and summarization accuracy. | No | k) None of the above | Yes | a) Yes | Improves the speed, accuracy, and traceability of reviewing Annual Construction Reports, which support environmental compliance and regulatory oversight. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | a) Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | a) Direct usability testing | ||||
| Federal Energy Regulatory Commission | FERC | FERC-0006 | AI Enabled Assistant Legal Research | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Streamlines legal workflows by automating research, document analysis, and drafting; addresses inefficiencies in legal operations and reduces the time required for complex legal tasks. | Enhances productivity, improves accuracy in legal analysis, and supports faster decision-making. It enables legal professionals to focus on high-value strategic work, improves client service, and reduces operational costs. | Outputs include legal research summaries, contract reviews, document annotations, and draft legal content. These are grounded in authoritative sources and subject, by policy, to human validation. | 30/07/2025 | a) Purchased from a vendor | Thomson Reuters | No | Outputs include legal research summaries, contract reviews, document annotations, and draft legal content. These are grounded in authoritative sources and subject, by policy, to human validation. | This is a subscription service from a third-party vendor, and confidential business information of this nature is not available for review; however, the vendor has stated it is trained on publicly available legal data and proprietary content. | No | k) None of the above | No | |||||||||||||
| Federal Housing Finance Agency | OGC | FHFA-1 | LexisNexis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Generative AI | Generative AI-assisted legal research tool | Efficiency | LexisNexis uses generative AI to search for and summarize legal information, resulting in more efficient legal work for FHFA. | 10/01/2023 | a) Purchased from a vendor | LexisNexis | No | LexisNexis uses generative AI to search for and summarize legal information, resulting in more efficient legal work for FHFA. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | DHMG | FHFA 2 | Neural Networks for FHFA Modeling Analytics Platform (FMAP) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Deployed | Helps researchers optimize the framework and specifications of the production Single-Family FHFA Modeling Analytics Platform (FMAP). | This tool uses neural networks to identify nonlinearities, anomalies, and important variables in loan-level mortgage data, which helps FHFA staff improve forecasts from the Single-Family Modeling Analytics Platform. | This tool uses neural networks to identify nonlinearities, anomalies, and important variables in loan-level mortgage data, which helps FHFA staff improve forecasts from the Single-Family Modeling Analytics Platform. | ||||||||||||||||||||||||||
| Federal Housing Finance Agency | OCOO | FHFA 3 | Phishing Email Identification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | Classical/Predictive Machine Learning | KnowBe4 PhishER is a tool to report and manage suspicious emails. This tool uses ML and AI to analyze suspicious emails to identify whether the email is spam, phishing, or malicious. The primary benefit of this tool is to automate responses without requiring human intervention to analyze each email that end users identify as suspicious. | Risk Reduction | Output is a response to suspicious emails without requiring a cybersecurity specialist to analyze hundreds of emails. This tool reduces the time to respond to end-user requests concerning suspicious activity. | 04/01/2021 | a) Purchased from a vendor | KnowBe4 PhishER | Yes | Output is a response to suspicious emails without requiring a cybersecurity specialist to analyze hundreds of emails. This tool reduces the time to respond to end-user requests concerning suspicious activity. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | OGC | FHFA 4 | WestLaw | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Generative AI | Generative AI-assisted legal research tool | Efficiency | WestLaw uses generative AI to search for and summarize legal information, resulting in more efficient legal work for FHFA. | 07/01/2018 | a) Purchased from a vendor | WestLaw | No | WestLaw uses generative AI to search for and summarize legal information, resulting in more efficient legal work for FHFA. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | DBR | FHFA 5 | Public Comments Summarization and Topic Classifications Pilot Project | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | |||||||||||||||||||||||||||||
| Federal Housing Finance Agency | OCFO | FHFA 6 | Virtual Acquisition Office (VAO) Ally | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | Generative AI | VAO Ally is part of the overall VAO subscription. VAO Ally reduces the amount of time necessary to research acquisition-related policy/guidance questions and scenarios by parsing multiple resources and providing responses/answers based on the question a user searches. | Efficiency | VAO Ally provides a summarized response/answer to the user’s question within the overall web-based VAO subscription. VAO Ally provides the source materials that supported its answer/output. | 07/01/2024 | a) Purchased from a vendor | VAO Ally | No | VAO Ally provides a summarized response/answer to the user’s question within the overall web-based VAO subscription. VAO Ally provides the source materials that supported its answer/output. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 7 | Analytics, Search, Queries and LLM integration for data management on Oracle | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired |||||||||||||||||||||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 8 | Python modules to support AI/ML | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | |||||||||||||||||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 9 | R modules to support AI/ML | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | |||||||||||||||||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 10 | Security and network monitoring using Cisco Identity Services Engine | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | Classical/Predictive Machine Learning | Improves security visibility, segmentation, and threat response by analyzing network traffic and user behavior in real time, enabling automatic detection, isolation, and resolution of potential security risks. | Risk Reduction | Cisco Identity Services Engine (ISE) uses AI for network visibility, segmentation, and threat response by analyzing network traffic and user behavior. | 07/01/2024 | a) Purchased from a vendor | Cisco | Yes | Cisco Identity Services Engine (ISE) uses AI for network visibility, segmentation, and threat response by analyzing network traffic and user behavior. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 11 | Desktop productivity using Microsoft 365 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Generative AI | To help foster smart recommendations, data insights, and enhanced collaboration. | Efficiency | Microsoft 365 suite (Teams, SharePoint, Excel, PowerPoint, Word) uses AI for various functionalities like language translation, smart recommendations, data insights, and enhanced collaboration. | 03/01/2023 | a) Purchased from a vendor | Microsoft | Yes | Microsoft 365 suite (Teams, SharePoint, Excel, PowerPoint, Word) uses AI for various functionalities like language translation, smart recommendations, data insights, and enhanced collaboration. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | https://www.fhfa.gov/document/fhfa-infrastructure-general-support-system-pia-11-20-2023.pdf | k) None of the above | No | https://www.fhfa.gov/document/fhfa-infrastructure-general-support-system-pia-11-20-2023.pdf | |||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 12 | Virtual Desktop using Citrix | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Classical/Predictive Machine Learning | Citrix uses AI to optimize virtual desktops, manage workloads, and enhance security with behavioral analysis and threat detection. | Cost Savings, Efficiency, and Risk Reduction | Citrix uses AI for optimizing virtual desktop performance, managing workloads, and enhancing security through behavioral analysis and threat detection. | 12/01/2024 | a) Purchased from a vendor | Citrix | Yes | Citrix uses AI for optimizing virtual desktop performance, managing workloads, and enhancing security through behavioral analysis and threat detection. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 13 | Analytics, Indexing, and Anomaly Detection for Data Management on SQL Server | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Classical/Predictive Machine Learning | Predictive analytics, automated indexing, and anomaly detection in data management and queries. | Efficiency and Risk Reduction | SQL Server incorporates AI for predictive analytics, automated indexing, and anomaly detection in data management and queries. | 11/01/2021 | a) Purchased from a vendor | Microsoft | Yes | SQL Server incorporates AI for predictive analytics, automated indexing, and anomaly detection in data management and queries. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 14 | Workflow automation and predictive analytics using ServiceNow | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Classical/Predictive Machine Learning | Help to automate workflows, predictive analytics, and providing intelligent service management solutions. | Efficiency | ServiceNow utilizes AI for automating workflows, predictive analytics, and providing intelligent service management solutions. | 03/01/2023 | a) Purchased from a vendor | ServiceNow | Yes | ServiceNow utilizes AI for automating workflows, predictive analytics, and providing intelligent service management solutions. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | https://www.fhfa.gov/document/fhfa-infrastructure-general-support-system-pia-11-20-2023.pdf | k) None of the above | No | https://www.fhfa.gov/document/fhfa-infrastructure-general-support-system-pia-11-20-2023.pdf | |||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 15 | Security and Compliance in file sharing and collaboration using Kiteworks | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | Classical/Predictive Machine Learning | Enhances security and compliance in file sharing and collaboration by detecting anomalies and potential threats in real time. | Risk Reduction | Kiteworks employs AI to enhance security and compliance in file sharing and collaboration by detecting anomalies and potential threats. | 10/01/2024 | a) Purchased from a vendor | Kiteworks | Yes | Kiteworks employs AI to enhance security and compliance in file sharing and collaboration by detecting anomalies and potential threats. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | https://www.fhfa.gov/document/fhfa-infrastructure-general-support-system-pia-11-20-2023.pdf | k) None of the above | No | https://www.fhfa.gov/document/fhfa-infrastructure-general-support-system-pia-11-20-2023.pdf | |||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 16 | Network monitoring, anomaly detection through WhatsUp Gold and Flowmon integration | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Classical/Predictive Machine Learning | Uses AI for network visibility, anomaly detection, and security through Flowmon integration. | Risk Reduction | Uses AI for network visibility, anomaly detection, and security through Flowmon integration. | 10/01/2022 | a) Purchased from a vendor | Flowmon | Yes | Uses AI for network visibility, anomaly detection, and security through Flowmon integration. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 17 | Mathematical formula transcription using Commonlook Online | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | Generative AI | Commonlook Online is used to translate complex mathematical formulas into plain language for publicly released documents. | Efficiency | Commonlook Online is used to translate complex mathematical formulas into plain language for publicly released documents. | 10/01/2024 | a) Purchased from a vendor | Commonlook Online | No | Commonlook Online is used to translate complex mathematical formulas into plain language for publicly released documents. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | k) None of the above | No | |||||||||||||||
| Federal Housing Finance Agency | OCIO | FHFA 18 | Mobile Worker Productivity using Apple iOS platform | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | Natural Language Processing (NLP) | The Apple iOS platform provides multiple applications with NLP capability. This predictive-text technology predicts the next word in real time and allows single-tap selection. | Efficiency | The Apple iOS platform provides multiple applications with NLP capability. This predictive-text technology predicts the next word in real time and allows single-tap selection. | 08/01/2019 | a) Purchased from a vendor | Apple | Yes | The Apple iOS platform provides multiple applications with NLP capability. This predictive-text technology predicts the next word in real time and allows single-tap selection. | This is a subscription service from a third-party vendor and confidential business information of this nature is not available. | No | https://www.fhfa.gov/document/2023-08-23-cisco-meraki-mdm-pia-public.pdf | k) None of the above | No | https://www.fhfa.gov/document/2023-08-23-cisco-meraki-mdm-pia-public.pdf | |||||||||||||
| Federal Reserve Board Of Governors | Division of Monetary Affairs | FRB-0001 | Economic Trend Modeling | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enhance economic forecasting accuracy beyond traditional statistical models to better predict economic events | Additional data beyond traditional statistical models | Multiple methods calculate the probability of an economic event in the next twelve months between 0 and 1 and are aggregated | 23/09/2019 | Developed in-house | Yes | Multiple methods calculate the probability of an economic event in the next twelve months between 0 and 1 and are aggregated | Internal – Fixed Income | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Information Technology | FRB-0003 | PDF Optical Character Recognition (Text) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automate extraction of structured column data from unstructured PDF reports to reduce manual processing time and errors | The intended purpose is to extract column data and determine the correct column names from a report in PDF format | The list of the appropriate column names in text format | 26/09/2024 | Developed in-house | Yes | The list of the appropriate column names in text format | Internal Documents | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Information Technology | FRB-0004 | PDF Optical Character Recognition (Images) | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | PDF OCR (Images) – Extract textual information from images embedded within PDF documents for comprehensive document analysis | The intended purpose is to extract text from the image embedded in PDF file | The AI system's output is the image details in text format for further analysis | The AI system's output is the image details in text format for further analysis | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Research and Statistics | FRB-0005 | Stock Market Analysis | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Quantify relationships between news media narratives and stock market volatility to improve market event understanding | To understand media narratives associated with stock market events | Time series measuring association of different news topic frequencies with market volatility | Time series measuring association of different news topic frequencies with market volatility | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0009 | Commercial Real Estate Index | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Synthesize commercial real estate data points into a comprehensive market index for improved market monitoring | Identify principal component using various data points to create a market index for commercial real estate | A principal component, which then gets used as the index | 01/10/2023 | Developed in-house | Yes | A principal component, which then gets used as the index | External – Real Estate | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0010 | Variable Optimization | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other – Economic & Financial | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Determine optimal lag structures to enhance accuracy for forecasting call report metrics | Iterate through alternative lag structures to identify the optimal one for forecasting call report metrics | A forecast for select call report metrics | 28/09/2023 | Developed in-house | Yes | A forecast for select call report metrics | Internal – FFIEC Call Report | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Research and Statistics | FRB-0011 | Credit Fragment Analysis | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify meaningful patterns in fragmented data to support more comprehensive research analysis | Research on credit fragments | Identification of observations | Identification of observations | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Office of the Secretary | FRB-0013 | Proposals and Public Comments | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Efficiently process large volumes of public regulatory comments while ensuring proper handling of sensitive information | The Board of Governors of the Federal Reserve System (Board) is developing the Proposals and Public Comments (PPC) system to electronically process and manage comments from the public on regulatory rulemakings, information collections, and other proposals (collectively, proposals) and to post those comments to the Board’s public website. The first release of PPC is expected in mid-November 2024. The Board’s processing of comments may use artificial intelligence (AI) to provide more efficient processing of public comments (e.g., PII redaction recommendations, spam detection). | The Board’s processing of comments may use artificial intelligence (AI) to provide more efficient processing of public comments (e.g., PII redaction recommendations, sentiment analysis, text matching, entity identification, and text similarity matching). A human verifies the system recommendations. | 16/11/2024 | Developed in-house | Yes | The Board’s processing of comments may use artificial intelligence (AI) to provide more efficient processing of public comments (e.g., PII redaction recommendations, sentiment analysis, text matching, entity identification, and text similarity matching). A human verifies the system recommendations. | External – Public Comments | Yes | https://www.federalreserve.gov/files/pia_ppc.pdf | None of the above | Yes | https://www.federalreserve.gov/files/pia_ppc.pdf | ||||||||||||
| Federal Reserve Board Of Governors | Division of Research and Statistics | FRB-0016 | Manufacturer Sentiment Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Convert qualitative survey responses into quantitative insights on industrial production forecasts | To quantify survey responses related to forecasts of industrial production | Time series measuring the sentiment of manufacturing via survey respondents | 01/07/2022 | Developed in-house | Yes | Time series measuring the sentiment of manufacturing via survey respondents | External – Manufacturing | Yes | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Research and Statistics | FRB-0017 | Supply Chain Estimations | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Identify and measure supply chain constraints that could impact broader economic performance | To estimate supply chain bottlenecks | Time series measuring supply chain bottleneck sentiment data | 01/12/2022 | Developed in-house | Yes | Time series measuring supply chain bottleneck sentiment data | Internal – Beige Book | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Office of the Inspector General | FRB-0021 | Body Worn Cameras Data Management System | Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Generative AI | Body Worn Cameras Data Management – Ensure proper handling of information in body camera footage while making content searchable | Audio recordings may be transcribed to text and/or redacted. Transcripts are then labeled “unverified” until an OIG designated member reviews, edits, and approves the final transcript. The system automatically detects and redacts screens (computer screens, digital signs), faces, and license plates captured in the footage. Prior to redacting any evidence, a special agent must first approve the redactions before sharing any evidence. | Text versions of spoken word | 01/07/2024 | Purchased from a vendor | Third Party Vendor | No | Text versions of spoken word | Yes | https://www.federalreserve.gov/files/pia_body_worn_cameras_oig.pdf | Other | No | https://www.federalreserve.gov/files/pia_body_worn_cameras_oig.pdf | ||||||||||||
| Federal Reserve Board Of Governors | Division of Research and Statistics | FRB-0022 | Market Fund Portfolio | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Accurately identify security issuers in money market fund portfolios | To assist with issuer classification of money market funds | A list of likely issuers of a particular security | 01/05/2021 | Developed in-house | Yes | A list of likely issuers of a particular security | External – SEC Filings | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Monetary Affairs | FRB-0024 | Sentiment Analysis of Earnings Transcripts | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Measure and classify sentiment in bank earnings calls to identify emerging trends | Additional insight into sentiments related to topics in bank earnings calls | The probability that text inputs are positive, negative, or neutral and classifies them as such | 01/10/2024 | Developed in-house | Yes | The probability that text inputs are positive, negative, or neutral and classifies them as such | Internal – Earning Transcripts | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0025 | Anomaly Detection | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Identifying irregular patterns in firm-submitted information based on historical submission patterns | Improve the iterative data quality process | Suggested messages regarding data quality of subsets of firm submitted data based on similar historical messages | 01/12/2022 | Developed in-house | Yes | Suggested messages regarding data quality of subsets of firm submitted data based on similar historical messages | Internal – QIS | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Consumer and Community Affairs | FRB-0026 | Consumer Complaints Explorer | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Categorize large volumes of consumer complaints into topics to facilitate appropriate analysis and response | Improve the classification of consumer complaints into topics using topic modeling | Gamma value, topic number, and top five terms for the topic number for each narrative | 01/01/2019 | Developed in-house | Yes | Gamma value, topic number, and top five terms for the topic number for each narrative | External – CFPB Consumer Complaints | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0028 | Regulatory Data Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify potential reporting anomalies by comparing current submitted values against expected ranges based on historical patterns | Improved data quality for stakeholders | Values for various predicted percentile levels for a given reporter are provided to an analyst to compare to current reported values | 09/09/2024 | Developed in-house | Yes | Values for various predicted percentile levels for a given reporter are provided to an analyst to compare to current reported values | Internal – Firm Data | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0029 | Decision Tree for Deposits Data | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other – Economic & Financial | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Detect outliers in current reporting data to enhance overall data quality and reliability | Improved process efficiency and data quality. | Predetermined variables are calculated and then filtered to identify potential outliers in the current reporting period data | 29/08/2024 | Developed in-house | Yes | Predetermined variables are calculated and then filtered to identify potential outliers in the current reporting period data | Internal – H.6 Money Stock Measures | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0031 | Novel Activities Call Report Classification | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other – Economic & Financial | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify emerging or non-traditional banking activities | The random forest model would parse out what reported categories most significantly predict inclusion on novel banking activity lists | The model identifies call report line items that are correlated with the banks on internal supervisory lists and classifies banks based on their statistical similarity to banks engaged in novel activities | 01/07/2024 | Developed in-house | Yes | The model identifies call report line items that are correlated with the banks on internal supervisory lists and classifies banks based on their statistical similarity to banks engaged in novel activities | Internal – FDIC Deposits | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0032 | Financial News Processing | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other – Economic & Financial | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing | Transform large volumes of unstructured financial news into structured, actionable insights | Assist with financial news processing | Interactive dashboards | 01/09/2024 | Developed in-house | Yes | Interactive dashboards | External – News Feed | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0033 | Bank Exam Quality Control – Model 1 | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Enhance quality control by incorporating external news data into performance assessment models | A traditional machine learning NLP model of bank performance that utilizes information in news articles among various inputs | Model outcomes are used as a component of quality control | Model outcomes are used as a component of quality control | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0034 | Document Summarization Statistics | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Extract meaningful metrics and term frequencies from lengthy documents to identify trends and common issues | Provide statistics about bank review letters based on the length of the letters and the frequency of common financial terms | Statistics on letters contained in a PDF | 22/12/2024 | Developed in-house | Yes | Statistics on letters contained in a PDF | Internal – Firm Data | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0035 | Writing Quality Analysis Model | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Ensure consistent, high-quality writing across organizational documents through automated style and quality assessment | A NLP model to help leaders better understand the writing quality and consistency of documents | Provides analysis score of consistency of writing style | Provides analysis score of consistency of writing style | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Information Technology | FRB-0036 | Comment Review System | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Manage, analyze, and categorize large volumes of public comments | The Comment Review System (CRS) is a system used by the Board of Governors of the Federal Reserve System (“Board”) to electronically process and manage comments from the public on regulatory rulemakings, information collections, and other proposals (collectively, “proposals”). The Board’s processing of comments may use artificial intelligence (AI) to provide more efficient processing of public comments (e.g., text matching, entity identification, and text similarity matching). | To assist the analyst in reviewing each comment, CRS uses traditional machine learning natural language processing (NLP) to provide text summarization, text matching with lists of topics, entity identification, and text similarity matching. In addition, CRS identifies duplicative or near-duplicative comment letters, provides full text search (including metadata properties), and provides the optionality of providing notes or labelling comments based on various metadata attributes. All public comments are reviewed in their entirety, and summaries are used to assist with these reviews. | 01/07/2021 | Developed in-house | Yes | To assist the analyst in reviewing each comment, CRS uses traditional machine learning natural language processing (NLP) to provide text summarization, text matching with lists of topics, entity identification, and text similarity matching. In addition, CRS identifies duplicative or near-duplicative comment letters, provides full text search (including metadata properties), and provides the optionality of providing notes or labelling comments based on various metadata attributes. All public comments are reviewed in their entirety, and summaries are used to assist with these reviews. | External – Public Comments | Yes | https://www.federalreserve.gov/files/pia_crs.pdf | None of the above | Yes | https://www.federalreserve.gov/files/pia_crs.pdf | ||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0037 | Trading Desk Grouping | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other – Economic & Financial | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing | Categorize wide-ranging trading desk operations by asset class | Grouping Trading Desk Descriptions | An asset class type for each desk | 01/04/2024 | Developed in-house | Yes | An asset class type for each desk | External – Banking Data | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0041 | Threshold Monitoring | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Timely identification of threshold limits | Enhance monitoring on limits thresholds, faster collaboration | Tools serve analysts to enhance monitoring on limits thresholds and actuals | Tools serve analysts to enhance monitoring on limits thresholds and actuals | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0042 | Supply and Demand Tool | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Optimize resource allocation based on defined rules to better balance supply and demand constraints | The tool is used to monitor and project resource and supply demand based on defined rule sets | Resource monitor based on rule outcomes | Resource monitor based on rule outcomes | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0043 | Outlier Detection | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Efficiently process large volumes of firm data to identify anomalies and extract meaningful insights | Traditional AI to help identify, synthesize, and deliver information provided by firms. | Identify, synthesize, and deliver information from firms | Identify, synthesize, and deliver information from firms | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0044 | Earnings Call Topic Model | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Systematically categorize earnings call content to enable more efficient and comprehensive analysis | Model used to classify topics from earnings calls | Classify information from Earnings Calls | Classify information from Earnings Calls | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0045 | Bank Performance Monitoring | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enhance capabilities through econometric modeling based on historical performance data | Model used to support off site risk analysis | Econometric model that supplements offsite risk analysis based on historical ratings | Econometric model that supplements offsite risk analysis based on historical ratings | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0047 | Bank Exam Quality Control – Model 2 | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Incorporating external news data into modeling | A traditional machine learning NLP model of bank ratings that utilizes information in news articles among various inputs | Model outcomes are used as a component of quality control | Model outcomes are used as a component of quality control | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0049 | Risk Rating Model – Community Banks | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Develop more accurate classification framework for community banks to appropriately tailor strategies | To improve upon the classification framework used to tier community banks according to risk, and to tailor examination intensity | Statistical methods are used in the variable selection process for a model | 30/10/2023 | Developed in-house | Yes | Statistical methods are used in the variable selection process for a model | Internal – FFIEC Call Report, Uniform Bank Performance Report | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0050 | Risk Rating Model | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other – Economic & Financial | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improve risk models and optimize resource allocation | Identify a set of factors that may be helpful in predicting the likelihood that a bank will experience an adverse outcome in the future, and to tier banks into High-, Moderate- and Low-risk tiers for review | Statistical methods are used in the variable selection process for a risk model | 02/10/2019 | Developed in-house | Yes | Statistical methods are used in the variable selection process for a risk model | Internal – FFIEC Call Report, Uniform Bank Performance Report | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Financial Stability | FRB-0051 | Financial System Data Analysis | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Model counterparty exposures across multiple institutions. | Enhance the ability to monitor trends in the financial system. | Model provides a calculated risk score | Model provides a calculated risk score | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0052 | Bank Examiner Search Engine | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other – Economic & Financial | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing | Saving examiners time by providing requested information needed during bank examinations | Provide efficiencies for bank examiners during examinations by helping to provide information faster and at greater scale | Retrieval of requested documents in original unaltered form | 03/06/2025 | Developed in-house | Yes | Retrieval of requested documents in original unaltered form | Internal – FFIEC Call Report | No | None of the above | Yes | ||||||||||||||
| Federal Reserve Board Of Governors | Division of Supervision and Regulation | FRB-0053 | Oasis Semantic Search | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Manually searching for documents is a significant task for each business use case | Reduce search time for each business use case | Retrieval of search results and documents that may otherwise be excluded by a keyword alone. | Retrieval of search results and documents that may otherwise be excluded by a keyword alone. | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Management | FRB-0054 | Virtual Benefits Assistant | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | Virtual Assistant is designed to answer questions on how to navigate the site and answer general questions | Get general answers on benefits and links to reference materials for additional information or benefits contact information | The system provides general responses and employees have the option to speak to a representative if they have more detailed questions. | 01/03/2025 | Purchased from a vendor | Benefits Manager | Yes | The system provides general responses and employees have the option to speak to a representative if they have more detailed questions. | Internal – Thrift Plan and Retirement Plan Documentation | No | None of the above | No | |||||||||||||
| Federal Reserve Board Of Governors | Division of Information Technology | FRB-0055 | Data Quality Prioritization Dashboard | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Edits are needed to call report data in a short period of time | Quicker outreach and data revision to enhance data quality for end users | Edit explanation and revision data from call reports | Edit explanation and revision data from call reports | |||||||||||||||||||||
| Federal Reserve Board Of Governors | Division of Monetary Affairs | FRB-0056 | Monitoring Earnings Conference Calls | Pre-deployment – The use case is in a development or acquisition status. | Other – Economic & Financial | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing | Need to identify mentions of using genAI alongside research and development | Information on R&D at reported organizations | Call transcripts with keywords | Call transcripts with keywords | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-1 | Anomaly Detection and Precursor Identification in UAV flight data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project is using past algorithms developed by the NASA ARC (Ames Research Center) Data Sciences Group and modifying them with application to identifying previously-unknown anomalies and precursors to known issues in UAV (Unmanned Aerial Vehicle) test flights. | This project is using past algorithms developed by the NASA ARC (Ames Research Center) Data Sciences Group and modifying them with application to identifying previously-unknown anomalies and precursors to known issues in UAV (Unmanned Aerial Vehicle) test flights. | previously-unknown anomalies and precursors to known issues in UAV (Unmanned Aerial Vehicle) test flights. | previously-unknown anomalies and precursors to known issues in UAV (Unmanned Aerial Vehicle) test flights. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-5 | Autoresolver/Tailored Arrival Manager | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The Autoresolver system is a tool for autonomous air traffic management. It is designed to perform many of the tasks that air-traffic controllers have historically performed including maintaining separation between aircraft, sequencing and scheduling aircraft across locations in space, and avoiding airspace volumes like weather systems and restricted airspace. | The Autoresolver system is a tool for autonomous air traffic management. It is designed to perform many of the tasks that air-traffic controllers have historically performed including maintaining separation between aircraft, sequencing and scheduling aircraft across locations in space, and avoiding airspace volumes like weather systems and restricted airspace. | autonomous air traffic management | autonomous air traffic management | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-8 | Deep Learning for Flood Mapping (DELTA) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Science | Retired | c) Not high-impact | Not high-impact | Computer Vision | DELTA simplifies machine learning for satellite imagery. | Remotely sensed imagery is increasingly used by emergency managers to monitor and map the impact of flood events to support preparedness, response, and critical decision making throughout the flood event lifecycle. To reduce latency in delivery of imagery-derived information, ensure consistent and reliably derived map products, and facilitate processing of an increasing volume of remote sensed data-streams, automated flood mapping workflows are needed. A joint USGS-NASA-Univ. Alabama initiative developed DELTA and applied it to automatic near-real time flood detection, using multiple sources of satellite imagery for use in disaster response. | monitor and map the impact of flood events to support preparedness, response, and critical decision making throughout the flood event lifecycle | monitor and map the impact of flood events to support preparedness, response, and critical decision making throughout the flood event lifecycle | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-9 | Distributed Spacecraft Autonomy | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Distributed Spacecraft Autonomy (DSA) is a project developed by the National Aeronautics and Space Administration that enables distributed spacecraft systems through the development of three capabilities: scalable communication, distributed coordination and planning, and human-swarm interaction. DSA will demonstrate these capabilities in two contexts. The first context is a flight demonstration consisting of a software payload hosted on the Starling-1 small-spacecraft mission. This software payload will use the on-board GPS receiver to perform in-situ, swarm-level reconfiguration in response to observed features in the Topside Ionosphere. The second context is a scalability study, which shows how the technologies developed in the flight demonstration can scale to a large number of spacecraft (≈ 100). The scalability demonstration applies the tools developed for the flight mission to a hardware-in-the-loop simulation of the flight software payload. | Distributed Spacecraft Autonomy (DSA) is a project developed by the National Aeronautics and Space Administration that enables distributed spacecraft systems through the development of three capabilities: scalable communication, distributed coordination and planning, and human-swarm interaction. DSA will demonstrate these capabilities in two contexts. The first context is a flight demonstration consisting of a software payload hosted on the Starling-1 small-spacecraft mission. This software payload will use the on-board GPS receiver to perform in-situ, swarm-level reconfiguration in response to observed features in the Topside Ionosphere. The second context is a scalability study, which shows how the technologies developed in the flight demonstration can scale to a large number of spacecraft (≈ 100). The scalability demonstration applies the tools developed for the flight mission to a hardware-in-the-loop simulation of the flight software payload. Distributed Spacecraft Systems are a type of multi-spacecraft mission architecture that can not only provide improved resolution, coverage, and availability of existing missions, but also enable missions that would be previously infeasible using traditional approaches. Autonomy is a critical need for these systems, since the cost associated with applying conventional approaches for command and control does not scale with the number of spacecraft. | DSA will demonstrate these capabilities in two contexts. The first context is a flight demonstration consisting of a software payload hosted on the Starling-1 small-spacecraft mission. This software payload will use the on-board GPS receiver to perform in-situ, swarm-level reconfiguration in response to observed features in the Topside Ionosphere. The second context is a scalability study, which shows how the technologies developed in the flight demonstration can scale to a large number of spacecraft (≈ 100). The scalability demonstration applies the tools developed for the flight mission to a hardware-in-the-loop simulation of the flight software payload. | DSA will demonstrate these capabilities in two contexts. The first context is a flight demonstration consisting of a software payload hosted on the Starling-1 small-spacecraft mission. This software payload will use the on-board GPS receiver to perform in-situ, swarm-level reconfiguration in response to observed features in the Topside Ionosphere. The second context is a scalability study, which shows how the technologies developed in the flight demonstration can scale to a large number of spacecraft (≈ 100). The scalability demonstration applies the tools developed for the flight mission to a hardware-in-the-loop simulation of the flight software payload. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-10 | ExoMiner discovery of ExoPlanets via data from the Kepler and TESS space telescopes | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ExoMiner is being used to statistically validate exoplanets detected by the Kepler space telescope and to identify promising exoplanet candidates for the TESS space telescope | ExoMiner has been used to validate 370 new exoplanets identified in data from the Kepler space telescope. It is now being used to identify promising exoplanet candidates in TESS mission data. It promises to significantly reduce the manual effort required while improving the accuracy of identifying promising TESS exoplanet candidates and rejecting astrophysical false positives and instrumental false alarms. | ExoMiner produces scores between 0.0 and 1.0 for a number of possible categories relevant to transiting exoplanet searches. | ExoMiner produces scores between 0.0 and 1.0 for a number of possible categories relevant to transiting exoplanet searches. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-11 | Explainable and robust deep semi-supervised model for multi-class anomaly detection in flight data | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This model is a semi-supervised deep-learning-based anomaly detection model for aircraft flight data. It is designed to work when a small subset of data is reviewed and labeled by experts. The most useful realm is where the size of labeled data is small, so that any supervised learning approach won't reach optimum performance. | This model is a semi-supervised deep-learning-based anomaly detection model for aircraft flight data. It is designed to work when a small subset of data is reviewed and labeled by experts. The most useful realm is where the size of labeled data is small, so that any supervised learning approach won't reach optimum performance. | anomaly detection | anomaly detection | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-13 | Anomaly detection in aeronautics data with quantum-compatible discrete deep generative model | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Our team developed high-performance unsupervised deep machine-learning models for the detection of flight-operations anomalies. The models’ engineered-feature (latent) spaces are composed of discrete variables, which allows an integration with quantum computing because (part of) the latent-space variables can be populated by quantum-state measurements, which are discrete in nature. | This project enabled the additional development of two quantum-capable unsupervised deep-learning models with discrete latent space (Bernoulli and Boltzmann priors). The models exhibit state-of-the-art anomaly-detection performance and robustness. Future versions of our models will be deployed on in-time flight-operations data streams. They will also be used to assess the performance and resource requirements of quantum and other physical computing devices. | anomaly detection | b) Developed in-house | No | anomaly detection | in-flight operations data streams | No | No | ||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-18 | Machine Learning Airport Surface Model: Airport Configuration Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The ML-airport-configuration software is developed to provide a reference implementation to serve as a research example of how to train and register Machine Learning (ML) models intended for predicting airport configurations. | The software provides examples of how to build three distinct pipelines for data query and save, data engineering, and data science. These pipelines enable scalable, repeatable, and maintainable development of ML models. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-19 | Machine Learning Airport Surface Model: Arrival Runway Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The ML-airport-arrival-runway software is developed to provide a reference implementation to serve as a research example of how to train and register Machine Learning (ML) models intended for predicting arrival runway assignments. | The software provides examples of how to build three distinct pipelines for data query and save, data engineering, and data science. These pipelines enable scalable, repeatable, and maintainable development of ML models. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-20 | Machine Learning Airport Surface Model: Departure Runway Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The ML-airport-departure-runway software is developed to provide a reference implementation to serve as a research example of how to train and register Machine Learning (ML) models intended for predicting departure runway assignments. | The software provides examples of how to build three distinct pipelines for data query and save, data engineering, and data science. These pipelines enable scalable, repeatable, and maintainable development of ML models. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-21 | Machine Learning Airport Surface Model: Estimated ON Time Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The ML-airport-estimated-ON software is developed to provide a reference implementation to serve as a research example of how to train and register Machine Learning (ML) models intended for predicting landing time. | The software provides examples of how to build three distinct pipelines for data query and save, data engineering, and data science. These pipelines enable scalable, repeatable, and maintainable development of ML models. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. The software is built in Python and leverages the open-source libraries kedro, scikit-learn, MLflow, and others. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. The software is built in Python and leverages the open-source libraries kedro, scikit-learn, MLflow, and others. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-22 | Machine Learning Airport Surface Model: Taxi-in Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The ML-airport-taxi-in software is developed to provide a reference implementation to serve as a research example of how to train and register Machine Learning (ML) models intended for four distinct use cases: 1) unimpeded AMA taxi in, 2) unimpeded ramp taxi in, 3) impeded AMA taxi in, and 4) impeded ramp taxi in. | The software provides examples of how to build three distinct pipelines for data query and save, data engineering, and data science. These pipelines enable scalable, repeatable, and maintainable development of ML models. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. The software is built in Python and leverages the open-source libraries kedro, scikit-learn, MLflow, and others. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. The software is built in Python and leverages the open-source libraries kedro, scikit-learn, MLflow, and others. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-23 | Machine Learning Airport Surface Model: Taxi-out Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The ML-airport-taxi-out software is developed to provide a reference implementation to serve as a research example of how to train and register Machine Learning (ML) models intended for predicting impeded and unimpeded taxi out duration. | The software provides examples of how to build three distinct pipelines for data query and save, data engineering, and data science. These pipelines enable scalable, repeatable, and maintainable development of ML models. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. The software is built in Python and leverages the open-source libraries kedro, scikit-learn, MLflow, and others. | The software is designed to point to databases which are not provided as part of the software release and thus this software is only intended to serve as an example of best practices. The software is built in Python and leverages the open-source libraries kedro, scikit-learn, MLflow, and others. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-25 | NextGen Advanced Methods: ATCSCC Webinar Speech2Text and Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Advanced Methods project explores the use of innovative and emerging technologies to drive post-operational analysis of Traffic Management for aircraft. It applies technologies such as machine learning (ML), artificial intelligence (AI), and advanced data analytics to improve the FAA’s traffic flow management. In this specific use case, our aim is to use deep learning to convert live ATCSCC webinar meeting conversation to text, and then apply natural language processing to the converted text data for later analysis and review. | The Advanced Methods project explores the use of innovative and emerging technologies to drive post-operational analysis of Traffic Management for aircraft. It applies technologies such as machine learning (ML), artificial intelligence (AI), and advanced data analytics to improve the FAA’s traffic flow management. In this specific use case, our aim is to use deep learning to convert live ATCSCC webinar meeting conversation to text, and then apply natural language processing to the converted text data for later analysis and review. | Text from speech to text and NLP of air traffic management content. | c) Developed with both contracting and in-house resources | NLP | Yes | Text from speech to text and NLP of air traffic management content. | traffic management data | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-26 | NextGen Data Analytics: Letters of Agreement | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Today, operation constraints are documented via Standard Operating Procedures (SOP) and Letters of Agreement (LOA) and are not made available to the public in a consistent manner. SOPs are specific to an air traffic control facility and specify the procedures necessary for safe operation in the sector. LOAs outline agreements establishing procedures and responsibilities between two parties (including crossing restrictions, holding patterns, emergency procedure coordination, etc.) The LOA/SOPs are published internally as scanned PDFs and are the responsibility of the facility to maintain. To reduce the manual effort of tagging the documents for ease of reference, there is an opportunity to use modern data analytics and machine learning to produce and disseminate constraints in a standardized manner. | Providing LOAs or SOPs to stakeholders will enable flight planners (pilots and vendors) to study or ingest this information and thereby plan flight trajectories that remain consistent with air traffic constraints. It is also fundamental to NextGen capabilities to share accurate data for purposes of creating new noise abatement procedures; improving NAS information for common situational awareness; and aligning to implement new tools to assist in future time-based flow management. | Text-based digitized SOPs and LOAs. | c) Developed with both contracting and in-house resources | FAA | Yes | Text-based digitized SOPs and LOAs. | documents | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-33 | Unsupervised anomaly detection in flight data with deep variational autoencoders | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This model is an unsupervised deep-learning-based anomaly detection model for aircraft flight data, built on variational autoencoders with a convolutional architecture. The model is designed to find anomalies in multivariate time-series and can work with heterogeneous data. | It is currently being tested and validated for anomaly detection in flight operational quality assurance data from commercial aircraft. | anomaly detection | anomaly detection | |||||||||||||||||||||
| National Aeronautics And Space Administration | GRC: Glenn Research Center | NASA-35 | Aero-Engines AI - a machine-learning app for aircraft engine system-performance prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Aero-Engines AI is a Windows app that deploys machine-learning analytics to predict aircraft engine performance. | Aero-Engines AI is an easy-to-use, time-saving tool for aircraft engine design-space exploration during the conceptual design stage. | Predicting engine TSFC, engine weight, core size, and turbomachinery stage counts | Predicting engine TSFC, engine weight, core size, and turbomachinery stage counts | |||||||||||||||||||||
| National Aeronautics And Space Administration | GRC: Glenn Research Center | NASA-40 | Graph Neural Networks for Airfoil Performance Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are investigating the use of Graph Convolutional Neural Networks to learn relationships between airfoil coordinates and to predict performance for aerodynamic analysis. Inputs include the shape of the airfoil and outputs are the coefficients of lift, drag, and moment. | The impact is that we have a new type of neural network architecture that we can potentially use for other projects. | Predictions of blade loss | Predictions of blade loss | |||||||||||||||||||||
| National Aeronautics And Space Administration | GRC: Glenn Research Center | NASA-41 | Inverse Design of Materials | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Discovering new materials is typically a mix of art and science, with timelines to create and robustly test a new material mix / manufacturing method ranging from ten to twenty years. This project seeks to enable rapid discovery, optimization, qualification, and deployment of fit-for-purpose materials. | The project is currently being utilized in an NESC investigation to improve SLS core stage weld quality. The technology will be used to select experiments for a fully autonomous robotic lab that is currently being procured to design better insulating materials for electrified aircraft. A Bayesian optimization framework predicts the next best simulation to run to minimize a target. For the PMC example, for instance, the objective was to minimize weight while maintaining a minimum margin of safety. | Outputs include recipes and approaches for new materials custom-tailored to applications with a 4x speedup for the overall materials discovery / design lifecycle, and potential 10x throughput for the same cycle based on parallelizing discovery of multiple materials at once. | Outputs include recipes and approaches for new materials custom-tailored to applications with a 4x speedup for the overall materials discovery / design lifecycle, and potential 10x throughput for the same cycle based on parallelizing discovery of multiple materials at once. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GRC: Glenn Research Center | NASA-42 | MADI - Strategic Foresight and Knowledge Management infrastructure for ARMD/TACP/CAS | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The CAS Discovery team is developing, testing and using digital infrastructure to meet the pace of expected deliverables in an automated, efficient, collaborative manner, including taking advantage of AI/ML capabilities. | A digital infrastructure would enable a curated problem database. It is essential to minimize rework due to not being able to find previous data, information, and analyses and having to repeat past work. This type of infrastructure would be adaptable to other innovation areas at NASA, especially those that are working in “problem space” rather than technology development space. In addition, MADI is able to speculate on futures and generate scenarios. MADI is also able to provide recombinant innovation support to identify ways to use archived technologies and proposals in new ways. | Recommendations and associations as natural language output. | Recommendations and associations as natural language output. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-64 | Applying machine learning techniques to enhance the sensitivity and selectivity of multifunctional sensor platform | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work applies machine learning techniques to enhance the sensitivity and selectivity of the multifunctional sensor platform, a very small circuit board which includes multiple tiny sensors producing a variety of data. | Applying machine learning techniques to enhance the sensitivity and selectivity of multifunctional sensor platform | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-66 | Artificial Intelligent Co-Processor Slice with Google Coral TPU For Automated Onboard Data-Product Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Objective of this proposal is to enable the use of state-of-the-art, experimental, artificial-intelligent (AI) microchip architectures such as the Google Coral TPU (Tensor Processing Unit) on a SmallSat platform. | Artificial Intelligent Co-Processor Slice with Google Coral TPU For Automated Onboard Data-Product Generation | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-80 | cFS High Performance Computing Framework (cFS HPCF) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | The core Flight System (cFS) High Performance Computing Framework (HPCF) provides an environment to support a wide variety of Science work, to include AI and ML. | cFS High Performance Computing Framework (cFS HPCF) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-89 | Creating U-Net + LSTM Hybrid Architecture for Image Series Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | This work focuses on creating U-Net + long short-term memory Hybrid analysis architectures for image series analysis on solar image data. | Creating U-Net + LSTM Hybrid Architecture for Image Series Analysis | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-93 | Deep Learning for Communication-Limited Spacecraft | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The objective of this project is to investigate the feasibility of applying deep-learning algorithms to communication-limited spacecraft, an operational domain where a slow, restricted, or intermittent downlink bottleneck inhibits the generation of large training datasets on the ground. With novel complex sensors generating ever-increasing amounts of data, it is imperative to be able to autonomously and robustly classify scientifically useful data to maximize scientific utility per bit transmitted to the ground. This project studies two classification approaches, including supervised transfer learning and unsupervised feature extraction followed by clustering, to optimize selection of data products for download. | Deep Learning for Communication-Limited Spacecraft | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-97 | Development of new tools for detecting and assessing resilient agricultural systems farm performance | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work uses convolutional long short-term memory neural networks to aid in development of new tools for detecting and assessing resilient agricultural systems farm performance based on a variety of Earth and agricultural sensor data. | Development of new tools for detecting and assessing resilient agricultural systems farm performance | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-100 | Earth Information System Fire Pilot | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | EIS-Fire developed a visualization and analysis data portal providing access to a variety of fire data products in a cloud-optimized, analysis-ready format. The team also developed an algorithm for mapping active fire perimeters from satellite fire detections and updated the representation of fire emissions in the Goddard Earth Observing System (GEOS) model to use improved Visible Infrared Imaging Radiometer Suite (VIIRS) fire data. Finally, EIS-Fire engaged stakeholders in identifying use cases to inform data portal development. | Earth Information System Fire Pilot | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-105 | Enabling Autonomous Differential Drag Control and Attitude Maneuvering Using Onboard Artificial Intelligence (AI) for SmallSat Distributed Space Missions (DSMs) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work is in enabling Autonomous Differential Drag Control and Attitude Maneuvering Using onboard Artificial Intelligence (AI) for SmallSat Distributed Space Missions (DSMs). AI systems include autonomous navigation & control, with inputs including location, state, and systems data. | Enabling Autonomous Differential Drag Control and Attitude Maneuvering Using Onboard Artificial Intelligence (AI) for SmallSat Distributed Space Missions (DSMs) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-113 | Forest responses to climate change | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work explores a variety of Physics-Guided ML techniques to contribute to terrestrial ecosystem and carbon cycle modeling and related fields. | Forest responses to climate change | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-123 | From Single Spacecraft to Synchronized Swarms: Fault Diagnosis for Distributed Spacecraft Mission Resilience | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | From Single Spacecraft to Synchronized Swarms: Fault Diagnosis for Distributed Spacecraft Mission Resilience. This project is in AI/ML-assisted fault detection and diagnosis to complement human capability for diagnosing issues. | From Single Spacecraft to Synchronized Swarms: Fault Diagnosis for Distributed Spacecraft Mission Resilience | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-127 | HEASARC X-ray Spectra and Light Curve data sets | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work creates and makes available X-ray Spectra and Light Curve data sets for ML analysis from the High Energy Astrophysics Science Archive Research Center (HEASARC), which includes data sets from multiple space-based observatories. | HEASARC X-ray Spectra and Light Curve data sets | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-128 | High Rate Digital Spectrometer Enhancement with Neural Network AI | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | As spaceborne spectrometers increase in spectral resolution, the growth in spectral data volume and the limited space-to-Earth communication bandwidth become obstacles to achieving higher-fidelity science measurements. To use the limited communication bandwidth effectively while reducing the loss of science data, smart on-board neural network processing, an approach proven in the fields of machine learning and artificial intelligence (AI), is proposed to reduce the spectral data volume while retaining the essential spectral information and mitigating the effect of human-introduced radio signal interference. | High Rate Digital Spectrometer Enhancement with Neural Network AI | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-129 | High Resolution Earth and Planetary Atmospheric Predictions using Machine Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are developing a machine learning (ML) tool that can predict high-resolution in situ atmospheric conditions using relatively lower-resolution remote data, e.g., from an orbiting spacecraft. | Our ML tool could be used to map and track atmospheric cycling on Earth and planetary bodies, not only as a fundamental science tool, but also as a mechanism for tracking planetary weather from orbit. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-133 | HyperMapping with Hyperspectral Precise Pointing Optical Sensor (HYPPOS) for Decadal Survey Mission Pathfinder Activities | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work uses HyperMapping techniques on Hyperspectral Precise Pointing Optical Sensor (HYPPOS) data for Decadal Survey Mission Pathfinder Activities. The effort is developing AI tools using NVIDIA GPUs and Google TPUs with thermal imagery to enable curiosity-driven remote sensing of hyperspectral features using an optical pointing hyperspectral spectrometer component. | Twin experiments with archived thermal and hyperspectral imagery are being used to develop and test novel AI pointing algorithms. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-143 | Knowledge Capture: helionauts.org | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This work uses natural language processing to perform text-based knowledge capture for the helionauts.org community, which is a cloud-based community of practice for Heliophysics experts. | Knowledge Capture: helionauts.org | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-148 | M4OPT: Multi-Mission Multi-Messenger Observation Planning Toolkit | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | M4OPT: Multi-Mission Multi-Messenger Observation Planning Toolkit. This work includes considerations for data integration and use across multiple Science missions and leveraging multiple sensors, suitable for traditional human or AIML-assisted analysis. | M4OPT: Multi-Mission Multi-Messenger Observation Planning Toolkit | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-149 | Machine Intelligence for Small Satellites | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The MAtISSE project seeks to develop a new approach to rapid, real-time extraction and classification of photometric light curves using a modern differencing technique and advanced DL integrated onto a compact graphics processing unit. | MAtISSE will develop this technique, which has the potential to greatly reduce the amount of data transmitted by an observatory, for implementation on a future CubeSat-based science payload with a thorough assessment of power requirements vs. processing and communications bandwidth. This technology will be especially applicable to small, power-limited spacecraft and may enable observations and science return that would be challenging or even impossible otherwise. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-154 | Machine Learning of Ocean Worlds Laboratory Analog Seawater Volatiles | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work focuses on algorithm development for software that can predict the composition of an ocean using mass spectra of analyzed volatile gases. | Development is specific to ocean worlds exploration (e.g., Europa/Enceladus), but applicable to Earth. | composition of an ocean | composition of an ocean | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-161 | MERRAMax Automated feature selection | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There is growing interest in using Intergovernmental Panel on Climate Change (IPCC)-class climate model outputs in ecological research. These models provide realistic, global representations of the climate system, projections for hundreds of variables (including Essential Climate Variables), and combine observations from an array of satellite, airborne, and in-situ sensors. Unfortunately, direct use of this important class of data has been limited due to the large size and complexity of model output collections, internal file complexity, and limited means for dynamically creating derived products of interest. To address these limitations, we have developed an AI-based stochastic convergence technology, called MERRA/Max, that combines HPC and Princeton's Maximum Entropy (MaxEnt) software to rapidly subset and identify potential drivers of change among the hundreds of variables in a climate model output collection. MERRA/Max reduces dimensionality by iteratively drawing on MaxEnt's capacity for feature selection to winnow randomly selected climate variables until a stable set of predictors is found. Preliminary work focuses on the MERRA reanalysis, a product of NASA's GEOS-5 modeling framework. At 1 petabyte in size, MERRA comprises over 700 climate variables and spans 1970 to the present at high temporal resolution. We evaluated MERRA/Max by modeling the bioclimatic envelope of Cassin's Sparrow using MERRA and BioClim variables. | MERRAMax Automated feature selection | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-166 | Multi-temporal analytic center framework | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Multi-temporal analytic center framework. This is an integrating mechanism rather than a specific AI or ML technology. | Multi-temporal analytic center framework | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-169 | Optimization of Machine Learning Algorithms for Lidar to Detect Aerosols, Clouds, and the PBL (Planetary Boundary Layer) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Optimization of Machine Learning Algorithms for Lidar to Detect Aerosols, Clouds, and the PBL (Planetary Boundary Layer) | Understanding aerosol and cloud conditions can enhance analysis of the Earth's climate system and this work uses ML to enhance signal to noise ratios, detect atmospheric features, and more. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-171 | Parameter estimation, emulator building, model tuning and sensitivity analysis of the ocean carbon cycle using ML | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work explores neural networks and other ML techniques for Earth Science topics and data sets related to global carbon cycle, ocean circulation, air-sea interactions and climate variability. | Parameter estimation, emulator building, model tuning and sensitivity analysis of the ocean carbon cycle using ML | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-172 | PIC-ASIC Spectrometer Demonstration for Ultrawideband and Hyperspectral Microwave Sounding | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This family of technologies can provide extremely small multi-sensor capabilities in areas such as ultrawideband and hyperspectral sensing. | PIC-ASIC Spectrometer Demonstration for Ultrawideband and Hyperspectral Microwave Sounding | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-175 | Portable flow inferencing device using Fourier Ptychography and Deep Learning for detection of biosignatures | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | This work is on a portable flow inferencing device using Fourier Ptychography and Deep Learning for detection of biosignatures based on sensors from NASA space probes. | Portable flow inferencing device using Fourier Ptychography and Deep Learning for detection of biosignatures | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-182 | RAMjET: RApid Machine lEarned Triage - AI to classify astrophysical phenomena in photometric lightcurves | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The MAtISSE project seeks to develop a new approach to rapid, real-time extraction and classification of photometric light curves using a modern differencing technique and advanced DL integrated onto a compact graphics processing unit. | MAtISSE will develop this technique, which has the potential to greatly reduce the amount of data transmitted by an observatory, for implementation on a future CubeSat-based science payload with a thorough assessment of power requirements vs. processing and communications bandwidth. This technology will be especially applicable to small, power-limited spacecraft and may enable observations and science return that would be challenging or even impossible otherwise. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-183 | Reconstruction of cloud vertical structure | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This work explores the feasibility of solving atmospheric remote sensing problems with machine learning using conditional generative adversarial networks (CGANs), implemented using convolutional neural networks. We apply the CGAN to generating two-dimensional cloud vertical structures that would be observed by the CloudSat satellite-based radar, using only the collocated Moderate-Resolution Imaging Spectrometer measurements as input. | Reconstruction of cloud vertical structure | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-185 | Research in Artificial Intelligence for Spacecraft Resilience (RAISR) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work is in AI/ML-assisted fault detection to complement human capability for diagnosing issues. | Research in Artificial Intelligence for Spacecraft Resilience (RAISR) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-187 | Science Translation with AI | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This work uses natural language processing and bidirectional encoder representations from transformers (BERT) to perform Science Translation on relevant documents or text files. | Science Translation with AI | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-189 | Semi-Automatic Landslide Detection (SALaD) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Semi-Automatic Landslide Detection (SALaD). NASA's Semi-Automatic Landslide Detection (SALaD) system combines three leading-edge technologies: open-source Python packages and modules, object-based image analysis (OBIA), and machine learning (ML). | Semi-Automatic Landslide Detection (SALaD) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-192 | SpectraClass: Semi-supervised learning in a Jupyter Notebook | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This is a workbench supporting interactive visual data analysis of sensor data. | It provides an extendable interface and toolsuite which can be used to jumpstart the development of novel methods for addressing a wide range of data analysis challenges in both the earth and space sciences. | We have chosen, as a science driver for the initial stage of development, the development of innovative semi-supervised machine learning methods for landscape classification using hyperspectral imagery. | We have chosen, as a science driver for the initial stage of development, the development of innovative semi-supervised machine learning methods for landscape classification using hyperspectral imagery. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-198 | SWaP-efficient, fast-wavelength-steering and time-division-multiplexing lidar technology capable of multi-beam ranging and concurrent hyper-spectral imaging | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We propose to integrate next-generation lidar, compact hyperspectral imaging and Artificial Intelligence (AI) technologies to provide a new remote sensing measurement capability for a broad range of Earth and planetary science objectives. | A groundbreaking lidar module consisting of a high-pulse-rate, fast-wavelength-tuning fiber laser and time-division-multiplexing receiver will make height measurements using 60 steerable beams with drastically increased efficiency compared to the state of the art. Concurrent hyperspectral imaging will greatly enhance science capabilities, and real-time AI neural network analysis of the images will enable optimized data collection and on-board processing. Incorporating these multiple emerging technologies will substantially reduce instrument size, weight, and power, thereby enabling lower-cost SmallSat-class missions with enhanced scientific return compared to the present day. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-201 | Toward Resilient Spacecraft: Artificially Intelligent Reasoning for Diagnosing Safe Modes | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This research will develop a proof-of-concept prototype for an intelligent and explainable on-board reasoning system capable of real-time diagnosis and decision making. | We will look specifically at implementing our proof-of-concept as a way to diagnose spacecraft safe-mode events, in an attempt to disambiguate non-urgent anomalous safe-mode events from more urgent safe-mode events which require a sunward burn maneuver. | The framework will leverage classification systems such as Neural Networks (NN) and various uncertainty processing methods such as Dempster–Shafer theory (DST) to make use of low-level data to inform a higher-level reasoning system that generates on-board information at an abstract, human-readable level. | The framework will leverage classification systems such as Neural Networks (NN) and various uncertainty processing methods such as Dempster–Shafer theory (DST) to make use of low-level data to inform a higher-level reasoning system that generates on-board information at an abstract, human-readable level. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-202 | Towards Scientific Autonomy: Applying Machine Learning to MOMA Science Data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work progresses NASA toward Scientific Autonomy by Applying Machine Learning to MOMA (Mars Organic Molecule Analyzer) Science Data to help search for signs of life. | Towards Scientific Autonomy: Applying Machine Learning to MOMA Science Data | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-204 | Training Data for Streamflow Estimation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work uses convolutional neural networks to refine training data for streamflow estimation. The project will implement, test, and operationalize a system to derive effective stream width data using data from ESA's Sentinel-1 C-band radar satellite constellation, archive the data produced, and distribute the data for free and open use to train machine learning models relating to stream flow and effective stream width. | Training Data for Streamflow Estimation | To create training data for machine learning using the European Space Agency's (ESA) Sentinel-1 C-band SAR data from ASF DAAC's growing cloud-based SAR data archive and new Sentinel-1 data as it is received. | To create training data for machine learning using the European Space Agency's (ESA) Sentinel-1 C-band SAR data from ASF DAAC's growing cloud-based SAR data archive and new Sentinel-1 data as it is received. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-208 | Unification of laboratory and observational data via learning algorithms for robust models of ice microphysics | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work applies neural networks and variational autoencoders to a variety of weather analysis tasks. | Unification of laboratory and observational data via learning algorithms for robust models of ice microphysics | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-210 | Using Machine Learning to Detect and Build Calibrated CME Datasets | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | This work uses "You Only Look Once" (YOLO) v3 machine learning techniques to detect and build calibrated CME (coronal mass ejection) datasets based on other solar sensor data. | Using Machine Learning to Detect and Build Calibrated CME Datasets | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-213 | Vlasov Informed Super Resolution (VISR) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work uses a physics informed neural network to gain insight into the Vlasov equation (key to plasma physics) based on plasma data from the Magnetospheric Multiscale mission sensors. | Vlasov Informed Super Resolution (VISR) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-214 | AEGIS: Autonomous Exploration for Gathering Increased Science | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AEGIS enables intelligent targeting and data acquisition by planetary rovers. It uses computer vision techniques to identify targets (e.g., rocks) in wide-angle images of the rover's surrounding terrain. If targets are found that match scientists' specifications, they are then measured autonomously using remote sensing instruments. AEGIS was first used on the MER Mission. It is currently in use on the MSL Mission to acquire data for the ChemCam instrument. It is planned for use in Spring of 2022 on the M2020 Mission to acquire data for the SuperCam instrument. | AEGIS enables intelligent targeting and data acquisition by planetary rovers. It uses computer vision techniques to identify targets (e.g., rocks) in wide-angle images of the rover's surrounding terrain. If targets are found that match scientists' specifications, they are then measured autonomously using remote sensing instruments. AEGIS was first used on the MER Mission. It is currently in use on the MSL Mission to acquire data for the ChemCam instrument. It is planned for use in Spring of 2022 on the M2020 Mission to acquire data for the SuperCam instrument. | Recommendations of relevant objects, e.g., Mars rocks, for scientific examination. | 01/01/2010 | b) Developed in-house | No | Recommendations of relevant objects, e.g., Mars rocks, for scientific examination. | wide-angle images of the rover's surrounding terrain | No | Yes |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-215 | Agile Science | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project seeks to enable agile science to be conducted by remote, autonomous spacecraft beyond the range of low-latency human control. | Spacecraft systems will have to be able to conduct onboard analysis of sensor data and images to choose scientific targets of opportunity, conduct on-board prioritization, conduct geometric reasoning, and implement planning, scheduling, and execution. Future missions to primitive bodies and deep space exploration may have limited time to explore unknown targets and to react/adapt to new science opportunities. | choose scientific targets of opportunity, conduct on-board prioritization, conduct geometric reasoning, and implement planning, scheduling, and execution | choose scientific targets of opportunity, conduct on-board prioritization, conduct geometric reasoning, and implement planning, scheduling, and execution | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-217 | ASPEN Mission Planner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Agentic AI | Based on AI techniques, ASPEN is a modular, reconfigurable application framework which is capable of supporting a wide variety of planning and scheduling applications. ASPEN provides a set of reusable software components that implement the elements commonly found in complex planning/scheduling systems, including: an expressive modeling language, a resource management system, a temporal reasoning system, and a graphical interface. | ASPEN has been used for many space missions including: Modified Antarctic Mapping Mission, Orbital Express, Earth Observing One, and ESA's Rosetta Orbiter. | Plan and schedule recommendations to optimize Science from Scientific Sensors. | 01/01/2002 | b) Developed in-house | No | Plan and schedule recommendations to optimize Science from Scientific Sensors. | Scientific Sensors | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-221 | CLASP Coverage Planning & Scheduling | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Compressed Large-scale Activity Scheduling and Planning (CLASP) project is a long-range scheduler for space-based or aerial instruments that can be modeled as pushbrooms (1D line sensors) dragged across the surface of the body being observed. It addresses the problem of choosing the orientation and on/off times of a pushbroom instrument or collection of pushbroom instruments such that the schedule covers as many target points as possible, but without oversubscribing memory and energy. Orientation and time of observation are derived from geometric computations that CLASP performs using the SPICE ephemeris toolkit. | CLASP allows mission planning teams to start with a baseline mission concept and simulate the mission's science return using models of science observations, spacecraft operations, downlink, and spacecraft trajectory. This analysis can then be folded back into many aspects of mission design -- including trajectory, spacecraft design, operations concept, and downlink concept. The long planning horizons allow this analysis to span an entire mission. Actively in use for optimized scheduling for the NISAR mission, the ECOSTRESS mission (study of water needs for plant areas), the EMIT mission (mineralogy of arid dusty regions), OCO-3 (atmospheric CO2), and more, as well as for numerous mission analyses and studies (100+). | Estimates of scientific mission outcomes / results, based on optimized scheduling of spacecraft and sensors. | 01/01/2008 | b) Developed in-house | No | Estimates of scientific mission outcomes / results, based on optimized scheduling of spacecraft and sensors. | surface of the body being observed | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-223 | Dynamic Targeting | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | In this approach, called Dynamic Targeting (DT), traditional broad swath instruments are supplemented by more focused instruments with narrow swath and/or limited duty cycle. These instruments use tracking information from other instruments to rapidly and continuously adjust their targeting and configuration to optimize their science. | We aim to develop the onboard technology necessary to allow the instruments to autonomously identify critical areas of interest (e.g. plumes, thermal anomalies) or avoidance (e.g. clouds) and retarget/reconfigure to increase science productivity while accounting for instrument operations constraints such as pointing/slewing, energy, thermal, and setup. With DT, missions could control viewing geometry to extract stereo, and smart instruments could autonomously track an event during an overflight to gain a more complete picture of geophysical and other events as they evolve through time and space. DT could even be used with a single instrument to map out the extent of a plume by tracing across the outer edge of the plume, distinguishing between plume and non-plume signals. | identify critical areas of interest (e.g. plumes, thermal anomalies) or avoidance (e.g. clouds) | identify critical areas of interest (e.g. plumes, thermal anomalies) or avoidance (e.g. clouds) | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-225 | Enhanced AutoNav for Perseverance Rover on Mars | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AutoNav on the Perseverance Rover autonomously plans a safe path based on stereo navigation camera images, using multiple technologies including a tree search for decision making, Dijkstra's algorithm for global path planning, stereo processing for 3D terrain reconstruction, and Approximate Clearance Evaluation (ACE) for safety checks. | AutoNav on the Perseverance Rover autonomously plans a safe path based on stereo navigation camera images, using multiple technologies including a tree search for decision making, Dijkstra's algorithm for global path planning, stereo processing for 3D terrain reconstruction, and Approximate Clearance Evaluation (ACE) for safety checks. | Recommended navigation path for Mars Rover. | 07/01/2020 | b) Developed in-house | No | Recommended navigation path for Mars Rover. | stereo navigation camera images | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-227 | Groundwater data interpolation in California’s Central Valley using multimodal data fusion and multivariate sequence-to-sequence transformation models | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We describe novel distributed Artificial Intelligence/Multi Agent algorithms to allocate observations in a constellation and compare their performance to centralized and highly distributed algorithms using realistic problem and orbit distributions. | We describe novel distributed Artificial Intelligence/Multi Agent algorithms to allocate observations in a constellation and compare their performance to centralized and highly distributed algorithms using realistic problem and orbit distributions. | compare their performance to centralized and highly distributed algorithms using realistic problem and orbit distributions. | compare their performance to centralized and highly distributed algorithms using realistic problem and orbit distributions. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-228 | Hybrid On-Board and Ground-Based Processing of Massive Sensor Data (HyspIRI IPM) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Future space missions will enable unprecedented monitoring of the Earth's environment and will generate immense volumes of science data. Getting this data to ground communications stations, through science processing, and delivered to end users is a tremendous challenge. On the ground, the spacecraft's orbit is projected, and automated mission-planning tools determine which onboard-processing mode the spacecraft should use. The orbit determines the type of terrain that the spacecraft would be overflying—land, ice, coast, or ocean, for instance. Each terrain mask implies a set of requested modes and priorities. | Future space missions will enable unprecedented monitoring of the Earth's environment and will generate immense volumes of science data. Getting this data to ground communications stations, through science processing, and delivered to end users is a tremendous challenge. On the ground, the spacecraft's orbit is projected, and automated mission-planning tools determine which onboard-processing mode the spacecraft should use. The orbit determines the type of terrain that the spacecraft would be overflying—land, ice, coast, or ocean, for instance. Each terrain mask implies a set of requested modes and priorities. | onboard-processing mode the spacecraft should use | onboard-processing mode the spacecraft should use | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-232 | Mars2020 Rover (Perseverance) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Agentic AI | Research, experiments, and engineering to empower future rovers with onboard autonomy; planning, scheduling & execution; path planning; onboard science; image processing; terrain classification; fault diagnosis; and location estimation. | Meta Search: Because the onboard scheduler will be invoked many times in a given sol (Martian day) with a range of possible contexts (due to execution variations), its non-backtracking nature leaves it vulnerable to brittleness. In order to mitigate this potential brittleness, the Copilot systems perform a Monte Carlo-based stochastic analysis to set meta-parameters of the scheduler - primarily activity priority, but also potentially preferred time and temporal constraints. | Mission-priority-based recommendations for scheduling Mars2020 Rover activities. | 07/01/2020 | b) Developed in-house | No | Mission-priority-based recommendations for scheduling Mars2020 Rover activities. | terrain input | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-234 | MLNav (Machine Learning Navigation) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Low-speed engagements with terrain features on Mars; part of core mission parameters. Mars Rover cannot harm humans or impact rights. | Classical/Predictive Machine Learning | Accelerates path planning of rovers and other types of vehicles through ML-based heuristics, while guaranteeing safety through conventional, model-based collision checking. | Accelerates path planning of rovers and other types of vehicles through ML-based heuristics, while guaranteeing safety through conventional, model-based collision checking. | Path planning recommendations for Mars2020 Rover | 07/01/2020 | b) Developed in-house | No | Path planning recommendations for Mars2020 Rover | Real terrain data from Mars on ENav simulator | No | Yes | ||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-235 | Neural network accelerated radiative transfer modeling | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Neural network accelerated radiative transfer modeling is intended to enhance efforts in the Earth Science domain. | Specifically, JPL constructed a flexible radiative transfer model (RTM) that combines physics-based models and artificial neural networks with the intent of providing fast radiative transfer modeling for global imaging spectroscopy missions, as well as large-scale airborne campaigns (ABoVE, Western Diversity Time Series, FIREX-AQ, etc.) | fast radiative transfer modeling | fast radiative transfer modeling | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-237 | Perseverance Rover on Mars - Terrain Relative Navigation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | 3D machine vision via dual cameras to inform convolutional neural networks for rover navigation path planning. | 3D machine vision via dual cameras to inform convolutional neural networks for rover navigation path planning. | Real-time terrain-relative navigation recommendations. | b) Developed in-house | No | Real-time terrain-relative navigation recommendations. | camera input | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-240 | Providing visualization tools and streamlining the detection and tracking of wildfire-induced smoke plumes during the Fire Influence on Regional to Global Environments and Air Quality (FIREX-AQ) mission | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Providing visualization tools and streamlining the detection and tracking of wildfire-induced smoke plumes during the Fire Influence on Regional to Global Environments and Air Quality (FIREX-AQ) mission is intended to enhance efforts in the Earth Science domain. | Specifically, by providing a hybrid unsupervised/supervised data processing pipeline for data fusion and wildfire/smoke identification, with unique classification products from multiple instruments for further structural understanding of smoke/fire dynamics. | detection and tracking of wildfire-induced smoke plumes | detection and tracking of wildfire-induced smoke plumes | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-241 | SCOTI (Scientific Captioning of Terrain Images) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | SCOTI (Scientific Captioning of Terrain Images) automatically generates natural language explanations of geological images taken by rovers. It uses a "show-attend-tell" model consisting of a CNN (Convolutional Neural Network) and an LSTM (Long Short-Term Memory), trained on scientist-generated labels for MSL images. | SCOTI provides onboard data summarization that would help ground operations selectively downlink high-priority data under data bandwidth constraints. | natural language explanations of geological images taken by rovers | natural language explanations of geological images taken by rovers | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-244 | SPOC (Soil Property and Object Classification) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Using a convolutional neural network (CNN), SPOC (Soil Property and Object Classification) takes rover images and classifies the terrain type (e.g., sand, soil) from visual appearance. | This ability enables the rover to drive more safely. | Terrain image classifications such as soil, sand, rock, etc. | 07/01/2020 | b) Developed in-house | No | Terrain image classifications such as soil, sand, rock, etc. | It is trained on labeled images from MER (Mars Exploration Rover), MSL (Mars Science Laboratory), and Mars 2020 rovers, annotated by tens of thousands of citizen scientists through the AI4Mars project. SPOC is deployed on MSL's ground operation system, and an onboard test on M2020 is being considered. SPOC is one of many inputs to navigation. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-247 | Trusted and exPlainable Artificial Intelligence for Saving Lives (TruePAL) Technology for First Responder Safety | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Fatalities caused by emergency vehicle collisions are 4.8 times higher for emergency responders than the national average. The Trusted and exPlainable Artificial Intelligence for Saving Lives (TruePAL) project was funded by DOT to study AI technology for saving the lives of first responders and roadside crews in and around active traffic. TruePAL, an AI assistant, provides real-time warning of risks by analyzing the environment and traffic patterns to generate timely warnings to drivers and roadside crews to avoid crashes. A deep neural network (DNN) was developed to detect and track vehicles, pedestrians, traffic signs, etc., and a Non-Axiomatic Reasoning System (NARS) to analyze risk and provide prioritized warning messages. A mobile app with an AI interface was developed to support verbal communication with first responders. The TruePAL AI assistant demonstrated real-time crash warning and human-factors design capabilities using the CARLA simulator. The TruePAL project has developed cutting-edge AI tools: a human-machine AI interface with the potential to save lives for first responders and roadside crews. | Fatalities caused by emergency vehicle collisions are 4.8 times higher for emergency responders than the national average. The Trusted and exPlainable Artificial Intelligence for Saving Lives (TruePAL) project was funded by DOT to study AI technology for saving the lives of first responders and roadside crews in and around active traffic. TruePAL, an AI assistant, provides real-time warning of risks by analyzing the environment and traffic patterns to generate timely warnings to drivers and roadside crews to avoid crashes. A deep neural network (DNN) was developed to detect and track vehicles, pedestrians, traffic signs, etc., and a Non-Axiomatic Reasoning System (NARS) to analyze risk and provide prioritized warning messages. A mobile app with an AI interface was developed to support verbal communication with first responders. The TruePAL AI assistant demonstrated real-time crash warning and human-factors design capabilities using the CARLA simulator. The TruePAL project has developed cutting-edge AI tools: a human-machine AI interface with the potential to save lives for first responders and roadside crews. | timely warning to drivers and roadside crews to avoid crashes | timely warning to drivers and roadside crews to avoid crashes | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-249 | Volcano SensorWeb | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-256 | Classification of features in Crew Earth Observations imagery | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Feature classification in Crew Earth Observations imagery of the following classes: Earth limb, lightning, the Moon, aurora, cities (64 cities, day/night), ISS equipment. For deployment on the Gateway to Astronaut Photography | Feature classification in Crew Earth Observations imagery of the following classes: Earth limb, lightning, the Moon, aurora, cities (64 cities, day/night), ISS equipment. For deployment on the Gateway to Astronaut Photography | Feature classification | Feature classification | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-258 | Cloud masks from Crew Earth Observations imagery | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Generation of cloud masks via segmentation in Crew Earth Observations images. For deployment on the Gateway to Astronaut Photography to supplement search tags. | Generation of cloud masks via segmentation in Crew Earth Observations images. For deployment on the Gateway to Astronaut Photography to supplement search tags. | cloud masks via segmentation | cloud masks via segmentation | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-259 | Comment Analytics Dashboard | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Developed using Python and R, the Comment Analysis Dashboard is designed to visualize sentiment-scored data provided by SF to better understand how crew members feel on a day-to-day basis, allowing for improved decision making when determining how to meet crew member needs. The data visualization comes with a data handling notebook designed to give SF insights into the analytics portion of the application, i.e., using comparative analysis to select a sentiment scoring model. | Developed using Python and R, the Comment Analysis Dashboard is designed to visualize sentiment-scored data provided by SF to better understand how crew members feel on a day-to-day basis, allowing for improved decision making when determining how to meet crew member needs. The data visualization comes with a data handling notebook designed to give SF insights into the analytics portion of the application, i.e., using comparative analysis to select a sentiment scoring model. | visualize sentiment-scored data | visualize sentiment-scored data | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-260 | Crew Earth Observations automated georeferencing/geolocation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Automated georeferencing of Crew Earth Observation images. For deployment on the Gateway to Astronaut Photography. | Automated georeferencing of Crew Earth Observation images. For deployment on the Gateway to Astronaut Photography. | This would inject images onto maps / virtual globe automatically. | This would inject images onto maps / virtual globe automatically. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-277 | Machine Learning for RFID (Radio Frequency Identification) tag localization to support logistics | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We currently have two production machine learning approaches to tackle RFID (Radio Frequency Identification) tag localization in the highly reflective environment imposed by the International Space Station. The first use case, REALMRFC, is a random forest classifier model with feature engineering performed by an RFID localization expert. The second use case is P-RFIDNet, a neural network with a ResNet50 backbone. In continued work, we have leveraged transfer learning to show how P-RFIDNet can be generalized to new RFID environments with limited training data. We benchmark P-RFIDNet and REALMRFC using data from the RFID Enabled Autonomous Logistics Management (REALM) system and ground truth derived from the Inventory Management System (IMS). | We currently have two production machine learning approaches to tackle RFID (Radio Frequency Identification) tag localization in the highly reflective environment imposed by the International Space Station. The first use case, REALMRFC, is a random forest classifier model with feature engineering performed by an RFID localization expert. The second use case is P-RFIDNet, a neural network with a ResNet50 backbone. In continued work, we have leveraged transfer learning to show how P-RFIDNet can be generalized to new RFID environments with limited training data. We benchmark P-RFIDNet and REALMRFC using data from the RFID Enabled Autonomous Logistics Management (REALM) system and ground truth derived from the Inventory Management System (IMS). | RFID (Radio Frequency Identification) tag localization | RFID (Radio Frequency Identification) tag localization | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-279 | Noise suppression in Human Spaceflight audio systems | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Noise. Is. Aggravating. Especially on voice and video calls. It’s certainly an unwelcome party to most conversations, but it has always been around. | Machine learning (ML) and artificial intelligence (AI) have recently led to highly effective noise-suppression and cancellation techniques, many of which are now being used at various levels of technology stacks: in hardware, middleware, apps, and more recently, SDKs (Software Development Kits) that are now available to a broad range of developers. The industries developing these new techniques and technologies have mostly focused on standard use cases like call centers, telephony, and simple voice and video calls that power daily business meetings, calls with friends and family, and so forth. Recently, calls are no longer just calls—increasingly, they are becoming online rooms, virtual pods, or meeting spaces where people interact in new ways or take part in activities together. | noise suppression | noise suppression | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-293 | Adaptive Neural Network Molecular Dynamics | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | An ML approach based on artificial neural networks (ANNs) is used in atomistic simulations to efficiently reproduce the very complex energy landscape resulting from the atomic interactions in materials, with the accuracy of more expensive quantum mechanics-based calculations. | An ML approach based on artificial neural networks (ANNs) is used in atomistic simulations to efficiently reproduce the very complex energy landscape resulting from the atomic interactions in materials, with the accuracy of more expensive quantum mechanics-based calculations. | The ANN gives optimized parameters for a predefined empirical function known as the bond-order potential (BOP). The parameterized BOP function is then used to calculate the energy of an atom. | The ANN gives optimized parameters for a predefined empirical function known as the bond-order potential (BOP). The parameterized BOP function is then used to calculate the energy of an atom. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-294 | Aerosol and Cloud Identification in SAGE III/ISS Aerosol Extinction Profiles | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work uses machine learning techniques to classify aerosols in SAGE (Stratospheric Aerosol and Gas Experiment) III / ISS (International Space Station) Aerosol Extinction Profile data. | This work uses machine learning techniques to classify aerosols in SAGE (Stratospheric Aerosol and Gas Experiment) III / ISS (International Space Station) Aerosol Extinction Profile data. | classify aerosols | classify aerosols | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-296 | AMP: An Automated Metadata Pipeline | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | In this work, we combine ontologies and machine learning to auto-generate robust, semantically consistent, variable-level metadata records for large NASA satellite collections. AMPed metadata supports improved data discovery, and AMP (Automated Metadata Pipeline) provides API (Application Program Interface) access to subsetted data on demand. | In this work, we combine ontologies and machine learning to auto-generate robust, semantically consistent, variable-level metadata records for large NASA satellite collections. AMPed metadata supports improved data discovery, and AMP (Automated Metadata Pipeline) provides API (Application Program Interface) access to subsetted data on demand. | metadata records | metadata records | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-297 | Application of High-Dimensional Fuzzy K-mean Cluster Analysis to CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) / CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observatory) Version 4.1 Feature Classifications | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project uses Fuzzy K-means clustering (unsupervised learning) to validate the cloud-aerosol discrimination algorithm used in the publicly distributed CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) / CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observatory) data products. | This project uses Fuzzy K-means clustering (unsupervised learning) to validate the cloud-aerosol discrimination algorithm used in the publicly distributed CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) / CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observatory) data products. | Applies convolutional neural networks (supervised learning) to automatically identify the presence of different aerosol species (e.g., dust and smoke) in CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observatory) lidar backscatter measurements. | Applies convolutional neural networks (supervised learning) to automatically identify the presence of different aerosol species (e.g., dust and smoke) in CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observatory) lidar backscatter measurements. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-300 | CERES FluxByCldTyp Data Product Narrowband-to-Broadband Algorithm Improvement Through Deep Neural Network (DNN) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Apply DNNs (Deep Neural Networks) to the CERES (Clouds and the Earth's Radiant Energy System) Flux by Cloud Type data narrowband-to-broadband algorithms to improve shortwave and longwave fluxes for both clear-sky and cloudy-sky conditions. | Apply DNNs (Deep Neural Networks) to the CERES (Clouds and the Earth's Radiant Energy System) Flux by Cloud Type data narrowband-to-broadband algorithms to improve shortwave and longwave fluxes for both clear-sky and cloudy-sky conditions. | improve shortwave and longwave fluxes for both clear-sky and cloudy-sky conditions | improve shortwave and longwave fluxes for both clear-sky and cloudy-sky conditions | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-302 | Cloud Detection Neural Network | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Detection of clouds below and above aircraft to reduce cloud contamination and improve NASA passive remote sensing of aerosols, oceans and lands from aircraft. | Detection of clouds below and above aircraft to reduce cloud contamination and improve NASA passive remote sensing of aerosols, oceans and lands from aircraft. | passive remote sensing of aerosols, oceans and lands from aircraft. | passive remote sensing of aerosols, oceans and lands from aircraft. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-303 | Combining Satellite and Ground-based Observations to Improve Cloud Ceiling Observations over CONUS for Aviation Weather | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A k-nearest neighbors method uses satellite observations to extend sparse ceilometer observations at surface stations to high spatiotemporal resolutions over CONUS (Continental United States). | A k-nearest neighbors method uses satellite observations to extend sparse ceilometer observations at surface stations to high spatiotemporal resolutions over CONUS (Continental United States). | high spatiotemporal resolution | high spatiotemporal resolution | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-306 | Detection of multi-layered clouds from multispectral MODIS/VIIRS data using an artificial neural network trained with CALIPSO-CloudSat data. | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Detection of multi-layered clouds from multispectral MODIS (Moderate Resolution Imaging Spectroradiometer) / VIIRS (Visible Infrared Imaging Radiometer Suite) data using an artificial neural network trained with CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observatory) - CloudSat data. | Detection of multi-layered clouds from multispectral MODIS (Moderate Resolution Imaging Spectroradiometer) / VIIRS (Visible Infrared Imaging Radiometer Suite) data using an artificial neural network trained with CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observatory) - CloudSat data. | multi-layered cloud detection | multi-layered cloud detection | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-307 | Developing Quantum Reservoir Computing Hardware and Software for Deep Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This study uses integrated photonic circuits, mode-locked lasers, and quantum optics to perform fast computing, which is then used to train recurrent neural networks through the so-called “reservoir computing” technique. | This study uses integrated photonic circuits, mode-locked lasers, and quantum optics to perform fast computing, which is then used to train recurrent neural networks through the so-called “reservoir computing” technique. | This study uses integrated photonic circuits, mode-locked lasers, and quantum optics to perform fast computing, which is then used to train recurrent neural networks through the so-called “reservoir computing” technique. | This study uses integrated photonic circuits, mode-locked lasers, and quantum optics to perform fast computing, which is then used to train recurrent neural networks through the so-called “reservoir computing” technique. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-313 | Estimation of Multilayered Cloud Properties from Geostationary Satellite Imager Data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | A deep neural network is developed and applied to geostationary satellite imagery data to identify and retrieve characteristics of multi-layered clouds for potential use in weather and climate applications. | A deep neural network is developed and applied to geostationary satellite imagery data to identify and retrieve characteristics of multi-layered clouds for potential use in weather and climate applications. | identify and retrieve characteristics of multi-layered cloud | identify and retrieve characteristics of multi-layered cloud | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-314 | Estimation of Multilayered Cloud Properties from Low-Earth-Orbit Satellite Imager Data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | An artificial neural network is developed and applied to satellite imagery data to identify and retrieve characteristics of multi-layered clouds for potential use in weather and climate applications. | An artificial neural network is developed and applied to satellite imagery data to identify and retrieve characteristics of multi-layered clouds for potential use in weather and climate applications. | identify and retrieve characteristics of multi-layered cloud | identify and retrieve characteristics of multi-layered cloud | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-317 | Geostationary Satellite Sounder Pathfinder Project for 4-D Atmospheric Thermodynamics and Winds | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A k-nearest neighbors technique, coupled with an advanced data assimilation system, is applied to hyperspectral infrared and geostationary satellite imager data to provide high spatiotemporal resolution analyses of atmospheric thermodynamics, winds and clouds for severe weather forecasting, cloud process studies and other meteorological applications. | A k-nearest neighbors technique, coupled with an advanced data assimilation system, is applied to hyperspectral infrared and geostationary satellite imager data to provide high spatiotemporal resolution analyses of atmospheric thermodynamics, winds and clouds for severe weather forecasting, cloud process studies and other meteorological applications. | high spatiotemporal resolution analyses of atmospheric thermodynamics, winds and clouds for severe weather forecasting, cloud process studies and other meteorological applications. | high spatiotemporal resolution analyses of atmospheric thermodynamics, winds and clouds for severe weather forecasting, cloud process studies and other meteorological applications. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-319 | Identifying Aerosol Subtypes from CALIPSO Lidar Profiles Using Deep Machine Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Applies convolutional neural networks (supervised learning) to automatically identify the presence of different aerosol species (e.g., dust and smoke) in CALIPSO lidar backscatter measurements. | Applies convolutional neural networks (supervised learning) to automatically identify the presence of different aerosol species (e.g., dust and smoke) in CALIPSO lidar backscatter measurements. | presence of different aerosol species (e.g., dust and smoke) | presence of different aerosol species (e.g., dust and smoke) | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-320 | Identifying flaws in manufacturing of composites | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Inspection for manufacturing flaws makes up 40% of fabrication time, so reducing that time would have a significant impact on overall manufacturing time. This does not account for the additional benefit of finding flaws previously undetectable by the current technique of visual inspection. | Inspection for manufacturing flaws makes up 40% of fabrication time, so reducing that time would have a significant impact on overall manufacturing time. This does not account for the additional benefit of finding flaws previously undetectable by the current technique of visual inspection. | A U-Net was trained to identify flaws in inspection images. | A U-Net was trained to identify flaws in inspection images. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-321 | Improvements in Global Nighttime Satellite Cloud Analyses for Weather and Climate | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A k-nearest neighbors method is applied to overcome nighttime infrared satellite imagery sensitivity limits that dramatically improves nighttime cloud analyses and their consistency with daytime analyses. | A k-nearest neighbors method is applied to overcome nighttime infrared satellite imagery sensitivity limits that dramatically improves nighttime cloud analyses and their consistency with daytime analyses. | nighttime cloud analyses and their consistency with daytime analyses | nighttime cloud analyses and their consistency with daytime analyses | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-322 | Improving CERES Low-Latency Surface Radiation Fluxes with Machine/Deep Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | In conjunction with sophisticated radiative transfer simulations, the CERES (Clouds and the Earth's Radiant Energy System) team is using machine and deep learning methods to improve near real-time surface radiative fluxes for clean energy, infrastructure energy use, and agricultural applications. | In conjunction with sophisticated radiative transfer simulations, the CERES (Clouds and the Earth's Radiant Energy System) team is using machine and deep learning methods to improve near real-time surface radiative fluxes for clean energy, infrastructure energy use, and agricultural applications. | improve near real-time surface radiative fluxes | improve near real-time surface radiative fluxes | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-325 | Intelligent Contingency Management | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Adapt and train AI algorithms to contribute to an autonomous vehicle mission manager for Advanced Air Mobility (Cargo, Air Taxis). At a high level, the AI must recognize contingency flight conditions and react appropriately to return the aircraft to safe flight status. The project has three main objectives: 1. Explore machine learning for intelligent contingency management, with a focus on assessing/projecting vehicle capability and maintaining nominal performance via reinforcement learning. 2. Develop vehicle intelligent contingency management system architecture at a functional level and validate against a specific Unmanned Air Mobility (UAM)-class vehicle. 3. Incorporate (1) and (2) into an evolving toolset for an autonomous vehicle. | Adapt and train AI algorithms to contribute to an autonomous vehicle mission manager for Advanced Air Mobility (Cargo, Air Taxis). At a high level, the AI must recognize contingency flight conditions and react appropriately to return the aircraft to safe flight status. The project has three main objectives: 1. Explore machine learning for intelligent contingency management, with a focus on assessing/projecting vehicle capability and maintaining nominal performance via reinforcement learning. 2. Develop vehicle intelligent contingency management system architecture at a functional level and validate against a specific Unmanned Air Mobility (UAM)-class vehicle. 3. Incorporate (1) and (2) into an evolving toolset for an autonomous vehicle. | Outputs include recognition of off-nominal conditions (contingencies) and mission execution strategy adjustments. | Outputs include recognition of off-nominal conditions (contingencies) and mission execution strategy adjustments. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-327 | Lessons Learned Bot (LLB) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | In near real-time, the Lessons Learned Bot, or LLB, brings lessons learned (LL) documents to users through a locally installed Microsoft Excel add-in application that searches for LL content relevant to the text within the selected Excel cell. The application will encompass a corpus of documents, a trained Machine Learning (ML) model, built-in ML tools to train the user’s documents, and an easy-to-use user interface to allow for the streamlined discovery of LL content. | Today, NASA’s LL are online and searchable via keywords. Nevertheless, users often face challenges finding lessons relevant to their issues. Applying advances in Natural Language Processing (NLP) ML algorithms, the LLB can find and rank LL records relevant to text in the user’s selected Excel cells, containing just a few words or entire paragraphs of text. Results are displayed to the user in their existing Excel workflow. | The LLB’s installation package comes with a pre-trained NASA LL dataset and a NASA Scientific and Technical Information (STI) dataset, as well as on-demand training tools allowing the user to apply the LLB search algorithm to their own discipline-specific datasets. Additionally, an API version of this software is available that can be called from any application within the Agency firewall. | The LLB’s installation package comes with a pre-trained NASA LL dataset and a NASA Scientific and Technical Information (STI) dataset, as well as on-demand training tools allowing the user to apply the LLB search algorithm to their own discipline-specific datasets. Additionally, an API version of this software is available that can be called from any application within the Agency firewall. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-329 | Machine Learning for Advancing Risk Precursor Identification Tools in Commercial Airline Terminal Area Operations (out of D318) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work uses a variety of machine learning techniques to recommend commercial terminal area aviation failure modes in addition to unstable vertical approach, evaluate these for machine learning feasibility, recommend the best failure mode and machine learning algorithm pairings, and incorporate these into the aviation risk precursor detection prototype back-end and user interface. | This work uses a variety of machine learning techniques to recommend commercial terminal area aviation failure modes in addition to unstable vertical approach, evaluate these for machine learning feasibility, recommend the best failure mode and machine learning algorithm pairings, and incorporate these into the aviation risk precursor detection prototype back-end and user interface. | recommendations | recommendations | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-331 | Mitigating Problematic GOES-17 Infrared Radiances at Night for Weather and Climate Applications | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A k-nearest neighbors method is applied to successfully reconstruct bad infrared radiance measurements at night caused by an instrument cooling problem on the GOES-17 (Geostationary Operational Environmental Satellite) Advanced Baseline Imager. | A k-nearest neighbors method is applied to successfully reconstruct bad infrared radiance measurements at night caused by an instrument cooling problem on the GOES-17 (Geostationary Operational Environmental Satellite) Advanced Baseline Imager. | reconstruct bad infrared radiance measurements | reconstruct bad infrared radiance measurements | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-334 | NASA OCIO STI Concept Tagging Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | An API (application programming interface) for exposing topic models created with the STI (Scientific & Technical Information) concept training repository. | An API (application programming interface) for exposing topic models created with the STI (Scientific & Technical Information) concept training repository. | API standards | b) Developed in-house | No | API standards | STI (Scientific & Technical Information) concept training repository | No | No | ||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-335 | New Skin Temperature Analyses for Satellite Cloud Retrievals | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A deep neural network is applied to model analyses and satellite observations to improve skin temperature estimates needed for satellite remote sensing of cloud properties. | A deep neural network is applied to model analyses and satellite observations to improve skin temperature estimates needed for satellite remote sensing of cloud properties. | improve skin temperature estimates | improve skin temperature estimates | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-336 | Nocturnal Opaque Ice Cloud Optical Depth Analyses from MODIS Multispectral Infrared Radiances | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | An artificial neural network is developed and applied to improve estimates of opaque ice cloud optical depths at night using MODIS (Moderate Resolution Imaging Spectroradiometer) data. | An artificial neural network is developed and applied to improve estimates of opaque ice cloud optical depths at night using MODIS (Moderate Resolution Imaging Spectroradiometer) data. | improve estimates of opaque ice cloud optical depths at night | improve estimates of opaque ice cloud optical depths at night | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-337 | PACE-MAPP | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The MAPP (modeling, analysis and prediction program) remote sensing retrieval algorithm for the upcoming PACE (plankton, aerosol, cloud, ocean ecosystem) satellite mission uses neural networks to drastically improve the computational speed and capability of determining aerosol, cloud, and ocean properties from space-based measurements of the Earth. | This framework can be used to train neural networks for a variety of satellite and airborne sensors, including passive polarimeter and hyperspectral sensors and active lidar instruments, to better understand the Earth's dynamic atmosphere, ocean and land systems. | determining aerosol, cloud, and ocean properties | determining aerosol, cloud, and ocean properties | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-340 | Polar Nighttime Cloud Detection Using an Artificial Neural Network | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | An artificial neural network is applied that addresses the difficult problem of accurately detecting clouds at night over snow- and ice-covered areas in polar regions from satellite imager data. | An artificial neural network is applied that addresses the difficult problem of accurately detecting clouds at night over snow- and ice-covered areas in polar regions from satellite imager data. | detecting clouds at night over snow- and ice-covered areas | detecting clouds at night over snow- and ice-covered areas | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-344 | Probabilistic calibration framework for finite element thermal process modeling of metallic additive manufacturing. Application to promote certification/qualification of load critical aerospace flight parts. | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Application of an active learning paradigm to efficiently develop a Gaussian Process Regression surrogate where the run time for the target finite element thermal process model is significant. The fully trained surrogate is then used by a Monte Carlo process to develop a probabilistic distribution of finite element model calibration variables that yields finite element model predictions in line with input empirical measurement distributions. | In short, this technique enables the robust and efficient calibration of a model simulating the manufacture of metallic parts by additive manufacturing. This is important because pure simulation of these processes is not possible without calibration, owing to the large uncertainties in currently available fundamental physical and material properties that stem from the nature of the manufacturing process. The inherent variation in mechanical properties of parts produced using metallic additive manufacturing results in significant challenges to certification/qualification for flight. The goal is to alleviate these challenges with better probabilistic quantification of the fundamentals resulting in this inherent variation, thus promoting further adoption of this fledgling manufacturing technique within the aeronautics industry. | probabilistic distribution of finite element model calibration variables | probabilistic distribution of finite element model calibration variables | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-347 | Retrieving Aerosol Optical Depth and High Spatial Resolution Ocean Surface Wind Speed From CALIPSO: A Neural Network Approach | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This study uses ocean surface wind speed derived from a microwave radiometer to train neural networks that retrieve both wind speed and aerosol optical depth from space-based lidar measurements. | This study uses ocean surface wind speed derived from a microwave radiometer to train neural networks that retrieve both wind speed and aerosol optical depth from space-based lidar measurements. | wind speed and aerosol optical depth | wind speed and aerosol optical depth | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-349 | Severe Storm Prediction via Overshooting Cloud Top and Above Anvil Cirrus Plume Image Recognition from Satellite Imager Data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Overshooting Cloud Tops (OTs) and Above-Anvil Cirrus Plumes (AACPs) are indicators of especially intense thunderstorm updrafts that are precursors to severe weather including damaging winds, hail, tornadoes, lightning, flooding rainfall, and aviation weather hazards. This project involves training machine learning image recognition techniques to identify OTs and AACPs in multispectral satellite imagery (visible, infrared, and lightning imaging) for storm warning. Such warnings are especially valuable in regions without near real-time weather radar networks or during radar outages. Applying these methods to long-term satellite data records enables the community to assess severe storm risk, which is needed by the insurance and reinsurance industries. | A NASA open-source software tool has been made publicly available to enable researchers and forecasters to apply the tool in near real time and for archived satellite imagery. This is the first tool of its kind that can be used for satellite-based severe storm detection: https://github.com/nasa/svrstormsig | Outputs include the likelihood of OTs and AACPs at the satellite pixel scale (e.g., 2 km pixel size), and metrics quantifying storm intensity based on cloud top temperature patterns. | Outputs include the likelihood of OTs and AACPs at the satellite pixel scale (e.g., 2 km pixel size), and metrics quantifying storm intensity based on cloud top temperature patterns. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-354 | Using Natural Language Processing to Help Automate the Standardization of PI Variable Names from ICARTT Files | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Variable names of data collected from suborbital missions need mapping to the corresponding standard variable names for consistency in labelling the available science data, enhancing data discovery for further research; ASDC (Atmospheric Science Data Center) is developing an AI/ML-based workflow for the assignment of these standard variable names based on Natural Language Processing (NLP). | Variable names of data collected from suborbital missions need mapping to the corresponding standard variable names for consistency in labelling the available science data, enhancing data discovery for further research; ASDC (Atmospheric Science Data Center) is developing an AI/ML-based workflow for the assignment of these standard variable names based on Natural Language Processing (NLP). | mapping to the corresponding standard variable names | mapping to the corresponding standard variable names | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-359 | Airplane detection | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Deep learning-based airplane detection from high-resolution satellite imagery | airplane detection from high-resolution satellite imagery | airplane detection | airplane detection | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-360 | Automated Dust detection in satellite imagery | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Application of machine learning to the problem of night-time dust detection with a simple random forest (RF) model using Geostationary Operational Environmental Satellite-16 (GOES-16) Advanced Baseline Imager (ABI) infrared imagery to identify dust in satellite imagery and output the probability dust is present | Application of machine learning to the problem of night-time dust detection with a simple random forest (RF) model using Geostationary Operational Environmental Satellite-16 (GOES-16) Advanced Baseline Imager (ABI) infrared imagery to identify dust in satellite imagery and output the probability dust is present | probability dust is present | probability dust is present | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-362 | Deep Learning-based Hurricane Intensity Estimator | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | A web-based situational awareness tool that uses deep learning on satellite images to objectively estimate windspeed of a hurricane | A web-based situational awareness tool that uses deep learning on satellite images to objectively estimate windspeed of a hurricane | estimate windspeed of a hurricane | estimate windspeed of a hurricane | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-365 | GCMD Keyword Recommender (GKR) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Natural Language Processing-based science keyword suggestion tool | Natural Language Processing-based science keyword suggestion tool | keyword suggestions | 01/01/2023 | b) Developed in-house | Yes | keyword suggestions | comparison of suggestions to thoroughly populated and manually curated metadata, but ultimately recommendations are either accepted or rejected by an experienced curator. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-370 | Marine debris detection using deep learning and high resolution satellite images | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Floating marine debris is a global pollution problem which threatens marine and human life and leads to the loss of biodiversity. Large swaths of marine debris are also navigational hazards to vessels. This project uses deep learning to detect floating marine debris in satellite imagery. | Floating marine debris is a global pollution problem which threatens marine and human life and leads to the loss of biodiversity. Large swaths of marine debris are also navigational hazards to vessels. This project uses deep learning to detect floating marine debris in satellite imagery. | detect floating marine debris | detect floating marine debris | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-372 | Near-Reality Slosh Model using AIML | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Liquid slosh for aircraft and spacecraft is a highly challenging topic; traditional slosh modeling & simulation approaches are very computationally expensive. This project seeks to use AIML techniques such as neural networks to create ML surrogate models to accurately approximate traditional techniques. | Time saving for slosh model generation | Outputs will include AIML-powered surrogate slosh models, to include accuracy and error measures. | Outputs will include AIML-powered surrogate slosh models, to include accuracy and error measures. | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-374 | Phenomena Detection Portal | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | A system that detects Earth science phenomena from the image archives using deep learning to provide efficient search and discovery of Earth science events | efficient search and discovery of Earth science events | efficient search and discovery of Earth science events | efficient search and discovery of Earth science events | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-375 | Pixel-Level Smoke Detection Model with Deep Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Automated, deep learning-based detection model capable of identifying smoke plumes from shortwave reflectance for the Geostationary Operational Environmental Satellite R series of satellites. | Automated, deep learning-based detection model capable of identifying smoke plumes from shortwave reflectance for the Geostationary Operational Environmental Satellite R series of satellites. | identifying smoke plumes | identifying smoke plumes | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-376 | Predicting streamflow with deep learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Uses a long short-term memory model to predict streamflow at USGS gauge sites | Uses a long short-term memory model to predict streamflow at USGS gauge sites | predictions of streamflow at USGS gauge sites | predictions of streamflow at USGS gauge sites | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-380 | Ship detection | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Deep learning-based ship detection from high-resolution satellite imagery | Deep learning-based ship detection from high-resolution satellite imagery | ship detection | ship detection | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-381 | Sinatra Software for Anomaly Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Flexible software framework that analyzes various input sources of data and detects anomalies. Automates neural network model creation and tuning. | Makes it easier for ML beginners to get started. | anomalies | anomalies | |||||||||||||||||||||
| National Aeronautics And Space Administration | SSC: Stennis Space Center | NASA-395 | INtelligent StennIs Gas House Technology (INSIGHT) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | INSIGHT is an operational system that performs autonomous Integrated System Health Management (ISHM) and autonomous operations of the Nitrogen System of the High Pressure Gas Facility at NASA Stennis Space Center. | It is an application implemented using the NASA Platform for Autonomous Systems (NPAS), described elsewhere in this inventory under the use case name "NASA Platform for Autonomous Systems (NPAS)". | autonomous operations | autonomous operations | |||||||||||||||||||||
| National Aeronautics And Space Administration | SSC: Stennis Space Center | NASA-396 | NASA Platform for Autonomous Systems (NPAS) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The NASA Platform for Autonomous Systems (NPAS) enables implementation of "thinking" systems, and in particular of "thinking" autonomous systems. A broad range of systems can be made to display "thinking" autonomous behavior, including fluid, mechanical, electrical, network, and computer systems. Additional types of systems can be easily included. | NPAS systems/applications incorporate autonomy strategies to deal with off-nominal cases, and strategies for fault management. NPAS includes infrastructure for autonomous operations (task definition, planning, scheduling, and execution) embodying specific concepts of operations. NPAS supports implementation of hierarchical distributed autonomous systems and operations. | AI behavior, "thinking," is grounded in a comprehensive representation of the system (comparable to SysML model descriptions that include health management and autonomy behaviors as well as schematic level descriptions), behavior/function models (physics-based, heuristic, rule-based - probabilistic and neural-network models can also be incorporated), Failure Modes and Effects Analysis (FMEA) with generic/re-usable libraries. | AI behavior, "thinking," is grounded in a comprehensive representation of the system (comparable to SysML model descriptions that include health management and autonomy behaviors as well as schematic level descriptions), behavior/function models (physics-based, heuristic, rule-based - probabilistic and neural-network models can also be incorporated), Failure Modes and Effects Analysis (FMEA) with generic/re-usable libraries. | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-398 | Autonomous WAiting Room Evaluation (AWARE) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This AI processes images of people waiting in the Langley Badge & Pass office. The people are counted and their images immediately deleted to avoid privacy issues. | Computer Vision | Using an existing security camera and a YOLO machine learning model to detect and count the number of people waiting for service at Langley's Badge & Pass Office. When a predetermined threshold of people is exceeded, automated texts and emails are sent to request additional help at the service counters. | Spreads load on the LaRC Badge & Pass Office (via a website showing employees how busy the office is) and adds BPO support by texting staff when the waiting room count threshold is exceeded. | Detection, classification and enumeration of people in the BPO waiting area | 01/01/2023 | b) Developed in-house | Yes | Detection, classification and enumeration of people in the BPO waiting area | Open-source Common Objects in Context (COCO) dataset | No | Yes | ||||||||||||||
| National Aeronautics And Space Administration | GRC: Glenn Research Center | NASA-401 | Pre-trained microscopy image neural network encoders | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Convolutional Neural Network encoders were trained on over 100,000 microscopy images of materials. | When deployed in downstream microscopy tasks through transfer learning, encoders pre-trained on MicroNet outperform ImageNet encoders. These pre-trained MicroNet encoders have been successfully deployed for semantic segmentation, instance segmentation, and regression tasks. | Automatically segment microscopy features given limited annotated microscopy images. | b) Developed in-house | Yes | Automatically segment microscopy features given limited annotated microscopy images. | Annotated microscopy images of various materials (metals, composites, EBC/CMC, etc.) | No | No | ||||||||||||||||
| National Aeronautics And Space Administration | GRC: Glenn Research Center | NASA-402 | Surrogate Models for Efficient Multiscale Modeling of Composite Materials | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A custom neural network architecture containing graph convolutional network (GCN) and long short-term memory (LSTM) layers was trained as a computationally efficient surrogate for a physics-based composite modeling simulation. | Surrogate models are attractive because they can be evaluated many orders of magnitude faster than physics-based models and with a high degree of accuracy. Such models can be used for efficient multiscale modeling, design optimization, Monte Carlo methods, and optimal experimental design in ways that would be intractable with many physics-based models. | Predict constitutive behavior as a function of material properties and applied strain. | Predict constitutive behavior as a function of material properties and applied strain. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-404 | Federated Learning Using In-Space Data (FLUID) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enables neural net models to be trained using a combination of terrestrial data and space-borne data without the need to downlink or uplink data for consolidation and training in the normal manner. Instead, small components of the overall neural net model are trained in-situ with the data and transmitted for federation into a single neural net, thereby reducing data transmission demands and reducing overall latencies by several orders of magnitude. | Enables neural net models to be trained using a combination of terrestrial data and space-borne data without the need to downlink or uplink data for consolidation and training in the normal manner. Instead, small components of the overall neural net model are trained in-situ with the data and transmitted for federation into a single neural net, thereby reducing data transmission demands and reducing overall latencies by several orders of magnitude. | For example, a neural net model to monitor lunar habitat astronauts for signs of lung toxicity due to regolith inhalation could be largely trained on Earth using similar data (e.g. data collected for volcanic ash inhalation) and then fine-tuned with data generated in-situ on the lunar surface, without the need to transmit that data back to Earth. The FLUID architecture has been fully tested using the Spaceborne Computer-2 on board the ISS; in February of 2024, the world's first neural net model trained with both terrestrial and in-space data was successfully trained. | For example, a neural net model to monitor lunar habitat astronauts for signs of lung toxicity due to regolith inhalation could be largely trained on Earth using similar data (e.g. data collected for volcanic ash inhalation) and then fine-tuned with data generated in-situ on the lunar surface, without the need to transmit that data back to Earth. The FLUID architecture has been fully tested using the Spaceborne Computer-2 on board the ISS; in February of 2024, the world's first neural net model trained with both terrestrial and in-space data was successfully trained. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-405 | Mapping Digital Infrastructure (MADI) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Mapping Digital Infrastructure (MADI) is a series of natural language processing-based tools designed to supplement the human-in-the-loop data exploration for CAS. | As CAS's Mapping and Synthesis teams work to identify transformational, wicked problems and potential solutions, MADI aims to supplement the information gathering and synthesis steps by clustering, theming, crawling, and summarizing publicly available data about emerging trends, needs, and capabilities. | CAS is also now working to validate the use of MADI in generating future scenarios and hypothetical trends that extend beyond the explicit semantic knowledge of the input sources. | CAS is also now working to validate the use of MADI in generating future scenarios and hypothetical trends that extend beyond the explicit semantic knowledge of the input sources. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-406 | Use of AI for UAV power train monitoring | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Use of PyTorch feedforward neural networks to monitor the health of electronic speed controllers and motors. | Use of PyTorch feedforward neural networks to monitor the health of electronic speed controllers and motors. | Use of PyTorch feedforward neural networks to monitor the health of electronic speed controllers and motors. | Use of PyTorch feedforward neural networks to monitor the health of electronic speed controllers and motors. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-410 | Nighttime Combustion Detection from NASA’s Black Marble | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case focuses on monitoring nighttime combustion over land from NASA’s Black Marble product and has demonstrated the detection of fires and gas flaring mapping by jointly using both thermal and light emission properties of these events. | The ML capability first demonstrates the generation of training samples using anomaly detection, tackling the challenge of training data generation in Earth Sciences. This is followed by classification of combustion and background across diverse geographic regions and seasons, producing detections from a suite of baseline ML approaches including per-band outlier detections, fully-connected neural networks, and Siamese Networks trained with triplet and contrastive loss. The detections are then jointly considered to create a high-confidence ensemble detection layer. | daily detections, ensemble output. | daily detections, ensemble output. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-411 | Biological and Physical Sciences (BPS) RNA Sequencing Benchmark Training Dataset | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | RNA sequencing data from spaceflown and control mouse liver samples, sourced from NASA GeneLab and augmented with a generative adversarial network to provide synthetic data points. | The implementation uses classification methods and hierarchical clustering to identify genes that are predictive of outcomes. | The implementation uses classification methods and hierarchical clustering to identify genes that are predictive of outcomes. | The implementation uses classification methods and hierarchical clustering to identify genes that are predictive of outcomes. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-414 | NIR spectroscopy-based analytical tool for immediate determination of chemical signature | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Goals: Develop an AI/ML-based tool that can identify material composition and size distribution “on the fly” | Importance: NIR is well-suited for soft matter research in space because (1) it doesn’t require any sample preparation, (2) it doesn’t need gravity-driven flow, and (3) the form factor can be of the order of a matchbox. Challenges: There is no tool presently available that can perform spectral decomposition to achieve the goals | NIR (Near IR) spectroscopy is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from 780 nm to 2500 nm). Every material has a unique spectrum. | NIR (Near IR) spectroscopy is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from 780 nm to 2500 nm). Every material has a unique spectrum. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-415 | Improved Differential Dynamic Microscopy (DDM) tool for characterization of soft matter | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Goals: Develop an AI/ML-based tool that can improve the processability of images and derive the outcome | Importance: DDM is well-suited for soft matter research in space because (1) it only needs a regular microscopy system and (2) it can be extremely modular and can be fitted in a form factor as small as a CubeSat. Challenges: The present methodology for DDM analysis requires a long time (~20-30 minutes) for processing images, and decomposition of the signal is a challenge | DDM is an optical microscopy method that can use high-speed imaging with data analysis to analyze soft active media (e.g., colloidal particles, polymers, biological samples, etc.) | DDM is an optical microscopy method that can use high-speed imaging with data analysis to analyze soft active media (e.g., colloidal particles, polymers, biological samples, etc.) | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-416 | The On-board Artificial Intelligence Research (OnAIR) Platform | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The On-board Artificial Intelligence Research (OnAIR) Platform is a generalized software framework for performing AI research. | OnAIR is intended to be used by Artificial Intelligence researchers who are exploring how to apply leading-edge Artificial Intelligence research to flight software in a simulated environment. For example, a researcher may use OnAIR to develop a proof of concept demonstrating that a specialized technique can diagnose satellite faults in a simulated environment. OnAIR is not intended to be used in flight software. | It provides a flight software analog capable of streaming live telemetry to and receiving commands from experimental code. | It provides a flight software analog capable of streaming live telemetry to and receiving commands from experimental code. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-417 | Troupe Rover Demonstration | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The Troupe Rover demonstration project is implementing an algorithm called "SafeMAP" or Safe Multi-Agent Planner. SafeMAP is used in distributed multi-rover or multi-UAV systems to plan the actions of a group of collaborating agents working on a shared goal. | The Troupe Rover demonstration project is implementing an algorithm called "SafeMAP" or Safe Multi-Agent Planner. SafeMAP is used in distributed multi-rover or multi-UAV systems to plan the actions of a group of collaborating agents working on a shared goal. | action plans | action plans | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-418 | Manager for Intelligent Knowledge Access (MIKA) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manager for Intelligent Knowledge Access (MIKA) is a Python toolkit specialized for state-of-the-art knowledge discovery and information retrieval for technical documents. | Intended for supporting early design hazard analysis, MIKA provides multiple natural language processing techniques and aims to enable rapid application of these different techniques to a relevant dataset. | Users can retrieve incident reports from a domain relevant to a design, extract themes in the dataset, and uncover trends. Results can be presented in multiple formats, including tables and as a Failure Modes and Effects Analysis (FMEA)-style analysis. | Users can retrieve incident reports from a domain relevant to a design, extract themes in the dataset, and uncover trends. Results can be presented in multiple formats, including tables and as a Failure Modes and Effects Analysis (FMEA)-style analysis. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-419 | AdaStress | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AdaStress is a tool for finding and analyzing the likeliest failures in a simulated system under test. | AdaStress is a tool for finding and analyzing the likeliest failures in a simulated system under test. | finding and analyzing the likeliest failures in a simulated system under test. | finding and analyzing the likeliest failures in a simulated system under test. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-420 | Rover ML-based vision system | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | The rover uses deep neural networks to perform perception, currently related to line following. | The rover uses deep neural networks to perform perception, currently related to line following. | The rover uses deep neural networks to perform perception, currently related to line following. | The rover uses deep neural networks to perform perception, currently related to line following. | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-426 | ImageLabeler | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Web-based Collaborative Machine Learning Training Data Generation Tool | Web-based Collaborative Machine Learning Training Data Generation Tool | Web-based Collaborative Machine Learning Training Data Generation Tool | Web-based Collaborative Machine Learning Training Data Generation Tool | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-428 | Oceanic cold pool detection using scatterometer winds | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Use a U-Net convolutional neural network to detect oceanic cold pools in spaceborne scatterometer wind observations. | Use a U-Net convolutional neural network to detect oceanic cold pools in spaceborne scatterometer wind observations. | oceanic cold pools | oceanic cold pools | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-429 | Similarity Search for Earth Science Image Archive | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Self-supervised learning approach to search image archives using a query image | Self-supervised learning approach to search image archives using a query image | similar images | similar images | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-443 | PASSION Computer | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | To develop a pipeline for processing raw overlappagram images through the various data products. To produce the final data product in a timely manner. | To develop a pipeline for processing raw overlappagram images through the various data products. To produce the final data product in a timely manner. | data product | data product | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-445 | Instantaneous Clarity of Ambient eNvironment Capability (ICAN-C) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | To determine the applicability of using AI/ML to correct obscured vision. Landing systems, where sediment is blown around, and/or dusty environments are the primary application. | Improving safety in instances of obscured vision | Output is a clearer image | 08/01/2025 | b) Developed in-house | No | Output is a clearer image | Open source image datasets and custom lunar lander datasets | No | No | |||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-447 | Trajectory mission design | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We use ML for several different purposes. First, and most importantly, we use ML to estimate initial conditions for POST and Copernicus for mission design. We have also used ML for investigatory work, trying to decipher what data is telling us and exploring datasets for root-cause analysis. | We use ML for several different purposes. First, and most importantly, we use ML to estimate initial conditions for POST and Copernicus for mission design. We have also used ML for investigatory work, trying to decipher what data is telling us and exploring datasets for root-cause analysis. | We use ML for several different purposes. First, and most importantly, we use ML to estimate initial conditions for POST and Copernicus for mission design. We have also used ML for investigatory work, trying to decipher what data is telling us and exploring datasets for root-cause analysis. | We use ML for several different purposes. First, and most importantly, we use ML to estimate initial conditions for POST and Copernicus for mission design. We have also used ML for investigatory work, trying to decipher what data is telling us and exploring datasets for root-cause analysis. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-449 | TAPAS (Track Augmentation and Performance Analysis System) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | TAPAS is a tool for automating the comparison of DSN telemetry data, designed to help DSN operators find potential anomalies faster and more efficiently. | It has three autonomous functions: an automatic comparative analysis of a given track against a set of historical tracks, anomaly detection compared to a reference normal track, and statistical difference comparison between two given tracks. By using TAPAS, DSN operators can quickly identify discrepancies in the telemetry data and respond to them more effectively. | anomalies | anomalies | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-450 | Onboard Planner for Mars2020 Rover (Perseverance) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Agentic AI | The M2020 onboard scheduler incrementally constructs a feasible schedule by iterating through activities in priority-first order. | Research, experiments, and engineering to empower future rovers with onboard autonomy; planning, scheduling & execution; path planning; onboard science; image processing; terrain classification; fault diagnosis; and location estimation. This is a multi-faceted effort and includes experimentation and demonstrations on-site at JPL's simulated mars navigation yard. | When considering each activity it computes the valid time intervals for placement, taking into account preheating, maintenance heating, and wake/sleep of the rover as required. After an activity is placed (other than a preheat/maintenance or wake/sleep), the activity is never reconsidered by the scheduler for deletion or moving. Therefore the scheduler can be considered non backtracking, and only searches in the sense that it computes valid timeline intervals for legal activity placement. | 07/01/2020 | b) Developed in-house | No | When considering each activity it computes the valid time intervals for placement, taking into account preheating, maintenance heating, and wake/sleep of the rover as required. After an activity is placed (other than a preheat/maintenance or wake/sleep), the activity is never reconsidered by the scheduler for deletion or moving. Therefore the scheduler can be considered non backtracking, and only searches in the sense that it computes valid timeline intervals for legal activity placement. | terrain input | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-451 | SensorWeb: Volcano, Flood, Wildfire, and others. | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Sensor Web Project uses a network of sensors linked by software and the internet to an autonomous satellite observation response capability. | This system of systems is designed with a flexible, modular architecture to facilitate expansion in sensors, customization of trigger conditions, and customization of responses. This system has been used to implement a global surveillance program to study volcanoes. We have also run sensorweb tests to study flooding, cryosphere events, and atmospheric phenomena. Specifically, in our application, we use low resolution, high coverage sensors to trigger observations by high resolution instruments. Note that there are many other rationales to network sensors into a sensorweb. For example, automated response might enable observation using complementary instruments such as imaging radar, infra-red, visible, etc. Or automated response might be used to apply more assets to increase the frequency of observation to improve the temporal resolution of available data. Our sensorweb project is being used to monitor the Earth's 50 most active volcanoes. We have also run sensorweb experiments to monitor flooding, wildfires, and cryospheric events (snowfall and melt, lake freezing and thawing, sea ice formation and breakup). | Identification and labelling of terrain, climate and weather features | 01/01/2003 | b) Developed in-house | No | Identification and labelling of terrain, climate and weather features | Sensor data | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-452 | Mexec Onboard Planning and Execution | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | MEXEC is a lightweight, multi-mission software for activity scheduling and execution developed to increase the autonomy and efficiency of a robotic explorer. MEXEC was first created as a prototype demonstration for the Europa Clipper project as a potential solution to fail-operational requirements. | Instead of command sequences, MEXEC works with task networks, which include abstract representations of command behavior, constraints on timing, and resources required and/or consumed by the behavior. Using this knowledge on-board, MEXEC can monitor command behavior and react to off-nominal outcomes (e.g. CPU reset), reconstructing command sequences to continue spacecraft operations without jeopardizing spacecraft safety. | Activity Plan/Schedule | Activity Plan/Schedule | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-454 | Requirement Alignment (REQAL) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Use NLP and large language models (LLMs) such as GPT or BERT to encode the experience of personnel into a knowledge graph, then train a graph neural network (GNN) to automatically match solicitations to experts in the organization. | Use NLP and large language models (LLMs) such as GPT or BERT to encode the experience of personnel into a knowledge graph, then train a graph neural network (GNN) to automatically match solicitations to experts in the organization. | solicitations | solicitations | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-456 | Robust, Explainable Autonomy for Scientific Icy Moon Operations (REASIMO) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This effort aims to improve the science yield and robustness of a wide range of NASA missions by increasing the level of flight-qualifiable autonomy that can be applied to such operations. | Specifically, we are developing autonomy for onboard mission use that could detect, diagnose, and respond appropriately to anomalies (faults, failures, degradations, or unexpected conditions) without the need to always drop into safe mode and call home. | Specifically, we are developing autonomy for onboard mission use that could detect, diagnose, and respond appropriately to anomalies (faults, failures, degradations, or unexpected conditions) without the need to always drop into safe mode and call home. | Specifically, we are developing autonomy for onboard mission use that could detect, diagnose, and respond appropriately to anomalies (faults, failures, degradations, or unexpected conditions) without the need to always drop into safe mode and call home. | |||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-475 | Friction Stir Weld Control | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The AI agent monitors the telemetry and supplements the human controlling the Self-Reacting Friction Stir Weld by sending updated trim commands to fine-tune the weld parameters, keeping the calculated power within the researched and specified band as conditions change. | The SR-FSW has a lower defect rate, reducing rework, cost, and the number of test welds, and providing more resilient structures. | The AI agent outputs updated trim commands sent to the welding machine. A log of all input data and internal data is created. | The AI agent outputs updated trim commands sent to the welding machine. A log of all input data and internal data is created. | |||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-487 | AI Digital Assistants (HRP) [was "Doc in a Box" for Earth Independent Medical Operations (EIMO)] | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | Digital assistant does not take any action, only gives information for a crew to act upon. Informational purposes only. | Agentic AI | A software agent that can perform tasks or services for an individual based on commands or questions. These commands can be given through text, voice, or other interfaces. DAs leverage artificial intelligence (AI), natural language processing (NLP) and machine learning (ML) to understand user requests, learn from interactions and provide increasingly personalized and accurate responses. | Improved cost savings, time savings, and process improvement in flight-related procedures related to medical conditions during flight. | Outputs recommendations for a human on possible activities to remediate medical conditions, for example while in space. | 10/01/2024 | b) Developed in-house | Yes | Outputs recommendations for a human on possible activities to remediate medical conditions, for example while in space. | Publicly available data sources related to medical conditions. | No | Yes | ||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-488 | Data Fracking (HRP) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Software systems and methods that leverage artificial intelligence methodologies to automate, enhance and streamline various processes related to documents. The core purpose of an AI document solution is to make document-driven tasks more efficient and accurate, while also providing valuable insights, all by minimizing manual labor and applying sophisticated analytical capabilities. | Improved cost savings, time savings, and process improvement in daily tasks related to scientific research. | Data related to searches performed which are pulled from input documents, including conversion from old format documents into newer formats (for example, images of documents to Excel). | 10/01/2024 | b) Developed in-house | Yes | Data related to searches performed which are pulled from input documents, including conversion from old format documents into newer formats (for example, images of documents to Excel). | Publicly available data sources related to scientific research or daily operational procedures. | No | No | |||||||||||||||
| National Aeronautics And Space Administration | WS: White Sands Test Facility | NASA-490 | AIML for Code Review and Workforce Augmentation (converted to collective) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Science | Retired | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | AI Studio API to Git-type repositories for code review, Debating Agent implementation, and automation. | AI Studio API to Git-type repositories for code review, Debating Agent implementation, and automation. | code reviews | code reviews | |||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-491 | OCFO External Performance Reporting | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Gen AI uses performance data provided by MDs to write a draft summary performance report | Gen AI uses performance data provided by MDs to write a draft summary performance report | Written Summary of MD-provided performance data in a formal report style | Written Summary of MD-provided performance data in a formal report style | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-493 | Reinforcement Learning for Lunar Lander Real Time Optimal Trajectory | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Reinforcement Learning | The proposal's objective is to apply RL for real-time optimal guidance and control in spacecraft systems, aiming to enhance efficiency, robustness, and autonomy. | The proposal's objective is to apply RL for real-time optimal guidance and control in spacecraft systems, aiming to enhance efficiency, robustness, and autonomy. | real-time optimal guidance and control in spacecraft systems | real-time optimal guidance and control in spacecraft systems | |||||||||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-494 | OCFO Credit Risk and Default Estimation Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine-learning-based estimation tool for company default risk | Machine-learning-based estimation tool for company default risk | default risk | default risk | |||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-495 | OCFO President Budget Request RAG | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | A retrieval augmented generation tool that can answer questions based on president's budget requests | A retrieval augmented generation tool that can answer questions based on president's budget requests | answers | answers | |||||||||||||||||||||
| National Aeronautics And Space Administration | AFRC: Armstrong Flight Research Center | NASA-498 | Borescope Artificial Intelligence (BAI) | a) Pre-deployment – The use case is in a development or acquisition status. | Other (use other text field) | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | The BAI will aid jet engine maintainers in identifying compressor and turbine blade defects | Advanced borescopes would lead to an increased defect capture rate in less maintenance time, keeping engines on wing longer with more confidence in their health. | The system will output areas of potential engine damage to the user to evaluate | The system will output areas of potential engine damage to the user to evaluate | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-499 | Coupled Loads Analysis Research Assistant (CLARA) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Information Technology | Retired | c) Not high-impact | Not high-impact | Generative AI | chat assistant to support loads analysts during computation. | time savings for loads analysts | RAG based on loads documents | RAG based on loads documents | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-505 | NASA E-Nose for the detection of disease using breath analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Develop one or more AI models that can classify disease related biosignatures from the data produced by the Ames NASA E-Nose when exposed to exhaled human breath. | Develop one or more AI models that can classify disease related biosignatures from the data produced by the Ames NASA E-Nose when exposed to exhaled human breath. | classification | classification | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-506 | SERVIR Applied Deep Learning Book | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The focus of the Applied Deep Learning Book is to provide practitioners with a wide variety of applied examples of remote sensing deep learning approaches, with each chapter focusing on a specific problem set such as object detection or downscaling using deep learning. Throughout the book's chapters, various examples are provided spanning the aforementioned thematic areas. | This provides a wide variety of thematic applications to complement readers' domain-specific practical knowledge, such as agronomy or forestry. We expect readers to come to this virtual book with preexisting geospatial expertise, but limited deep learning knowledge and application experience, specifically around environmental and remote sensing oriented challenges. | Data Preparation, Semantic Segmentation, Object Detection, Time Series, Ecological Process Simulations, Transfer Learning, Fusion, Downscaling | Data Preparation, Semantic Segmentation, Object Detection, Time Series, Ecological Process Simulations, Transfer Learning, Fusion, Downscaling | |||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-508 | Atomistic Force Field Accelerator (ARFFA) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Training of machine learning models of physically informed neural network interatomic force fields for molecular dynamics simulations of material systems of interest at the atomic level | Training of machine learning models of physically informed neural network interatomic force fields for molecular dynamics simulations of material systems of interest at the atomic level | molecular dynamics simulations of material systems of interest at the atomic level | molecular dynamics simulations of material systems of interest at the atomic level | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-512 | Advanced Air Mobility Digital Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | A digital assistant using open-source large language models designed to assist Systems Engineers with navigating the complex web of requirements surrounding Urban Air Mobility. The system also utilizes a RAG approach to recommend relevant FAA requirements, regulations, and conops to the user. | A digital assistant using open-source large language models designed to assist Systems Engineers with navigating the complex web of requirements surrounding Urban Air Mobility. The system also utilizes a RAG approach to recommend relevant FAA requirements, regulations, and conops to the user. | recommend relevant FAA requirements, regulations, and conops to the user. | recommend relevant FAA requirements, regulations, and conops to the user. | |||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-515 | Large language models for food security outlooks | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The USAID Famine Early Warning Systems Network (FEWS NET) produces food security outlooks and bulletins that can complement the perspective of Earth observations. We are exploring how large language models can facilitate the interpretation of these bulletins and outlooks for an integrated approach to food security informed by both ground-level reports and satellites. | The USAID Famine Early Warning Systems Network (FEWS NET) produces food security outlooks and bulletins that can complement the perspective of Earth observations. We are exploring how large language models can facilitate the interpretation of these bulletins and outlooks for an integrated approach to food security informed by both ground-level reports and satellites. | Interpretation of these bulletins and outlooks for an integrated approach to food security | Interpretation of these bulletins and outlooks for an integrated approach to food security | |||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-519 | Foundation model for Weather and Climate | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Custom network design that uses MERRA data to learn atmospheric processes. The foundation model can then be adapted for many different applications, including downscaling and parameterization. | Custom network design that uses MERRA data to learn atmospheric processes. The foundation model can then be adapted for many different applications, including downscaling and parameterization. | The foundation model can then be adapted for many different applications, including downscaling and parameterization. | The foundation model can then be adapted for many different applications, including downscaling and parameterization. | |||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-520 | Foundation Model for Heliophysics | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | An AI foundation model for heliophysics to enable the research community to build AI applications on the SDO data sets | An AI foundation model for heliophysics to enable the research community to build AI applications on the SDO data sets | An AI foundation model for heliophysics to enable the research community to build AI applications on the SDO data sets | An AI foundation model for heliophysics to enable the research community to build AI applications on the SDO data sets | |||||||||||||||
| National Aeronautics And Space Administration | GRC: Glenn Research Center | NASA-526 | Design Optimization of Turbomachinery Rotor Blades using Neural Network Surrogate Models | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A sample of the design space is drawn using Latin hypercube sampling. The geometry for these samples is then generated and evaluated using simulation tools. The results are then used to train a neural network for use as a surrogate model in design optimization. | Early work shows R^2 > 0.99 (compared to FEA results) on both test and validation datasets when predicting structural performance of a rotor blade as a function of 6 design variables | Current models provide predictions for the margin of safety associated with a rotor blade geometry, enabling faster design optimization than using FEA alone. | 05/01/2025 | b) Developed in-house | Yes | Current models provide predictions for the margin of safety associated with a rotor blade geometry, enabling faster design optimization than using FEA alone. | Data consists of Finite Element Analysis (FEA) simulations of rotor blades, generated with ANSYS Mechanical. Solutions were split into separate training, validation, and test datasets. | No | No | |||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-527 | Spaceport Throughput Analysis Resource | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Spaceport Throughput Analysis Resource (STAR) assesses whether a set of proposed launches and associated activities can be performed given the resources KSC currently has available and the external constraints imposed on KSC operations. | The Spaceport Throughput Analysis Resource (STAR) assesses whether a set of proposed launches and associated activities can be performed given the resources KSC currently has available and the external constraints imposed on KSC operations. | whether a set of proposed launches and associated activities can be performed | whether a set of proposed launches and associated activities can be performed | |||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-528 | Deep Learning Inhale Breath Detection Model to Support CO2 Washout Suit Testing / Inspired CO2 Algorithm for Respiration User Signals (ICARUS): A Deep Learning Model to Enable Real-Time and Post-Test ppCO2 Inspired Calculations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A Deep Learning Recurrent Neural Network (RNN) Long Short-Term Memory (LSTM) model is being developed for determining the start and end of inhale breaths for CO2 washout suit testing. | A Deep Learning Recurrent Neural Network (RNN) Long Short-Term Memory (LSTM) model is being developed for determining the start and end of inhale breaths for CO2 washout suit testing. | A Deep Learning Recurrent Neural Network (RNN) Long Short-Term Memory (LSTM) model is being developed for determining the start and end of inhale breaths for CO2 washout suit testing. | A Deep Learning Recurrent Neural Network (RNN) Long Short-Term Memory (LSTM) model is being developed for determining the start and end of inhale breaths for CO2 washout suit testing. | |||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-529 | SVM Classification of Write-in Topics from MSD Customer Sat Survey | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Support Vector Machine used to analyze write-in feedback from annual NASA MSD Customer Sat Survey, to classify comments into roughly 30 topics for visualization by Voice of the Customer Explorer (VOCE) dashboard | AIML lets us surface insights from MSD Customer Sat Survey write-in comments that otherwise would be ignored. | Classification of comments into a taxonomy of themes | Classification of comments into a taxonomy of themes | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-530 | Solar Sail Shape Reconstruction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The intended use of this model is to predict the shape of a solar sail in flight. | The intended use of this model is to predict the shape of a solar sail in flight. | The predicted sail shape as described by several high-level parameters (e.g., boom tip deflection out of the sail plane). | The predicted sail shape as described by several high-level parameters (e.g., boom tip deflection out of the sail plane). | |||||||||||||||||||||
| National Aeronautics And Space Administration | GISS: Goddard Institute for Space Studies | NASA-535 | ATLAS | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Atlas is a RAG-based solution for answering questions about NASA standards, directives, and Space Act Agreements. It uses publicly available NASA standards, directives, and Space Act Agreements. Users ask questions of Atlas, which then finds relevant excerpts from the corpus of NASA public data in order to provide answers. | The framework developed for this may be applied to other high-interest data sets. | answering questions about NASA standards, directives, and Space Act Agreements | answering questions about NASA standards, directives, and Space Act Agreements | |||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-537 | NASA-GPT: Retrieval-Augmented Generation using NTRS and more | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case is an example of Retrieval-Augmented Generation (RAG), which, roughly speaking, is an AI-augmented form of search. It acts much like ChatGPT or other such services, except that it also provides references. | This use case is an example of Retrieval-Augmented Generation (RAG), which, roughly speaking, is an AI-augmented form of search. It acts much like ChatGPT or other such services, except that it also provides references. | answers | answers | |||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-538 | Hierarchical U.N. Standard Product and Service Code Classification through logistic regression with the scikit-learn library | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Aid in UNSPSC classification of SEWP Contract Line Item Numbers (CLINs). | Aid in UNSPSC classification of SEWP Contract Line Item Numbers (CLINs). | Outputs: The single or multiple most likely UNSPSC values for a given CLIN's data. | Outputs: The single or multiple most likely UNSPSC values for a given CLIN's data. | |||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-539 | ChatGSFC | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | ChatGSFC is an AI-powered chatbot designed for Goddard Space Flight Center personnel. It is powered by a state-of-the-art large language model (currently Claude 3.5 Sonnet) and is capable of assisting with a wide range of tasks: general questions and quick research, document editing, code explanation and generation, and data analysis. | Increased productivity, quick access to information, and assistance with complex problem-solving. | Potential use cases span multiple domains: engineering tasks, scientific research, project management, technical writing, and administrative support. Users may bring their own data up to FISMA-Moderate/CUI; ITAR/EAR/PII data are not allowed. Designed to enhance daily operations and projects at GSFC. | c) Developed with both contracting and in-house resources | Element 84 | Yes | Potential use cases span multiple domains: engineering tasks, scientific research, project management, technical writing, and administrative support. Users may bring their own data up to FISMA-Moderate/CUI; ITAR/EAR/PII data are not allowed. Designed to enhance daily operations and projects at GSFC. | ChatGSFC | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-540 | General AI LLM Chatbot for NASA CUI Data: Accelerated Langley Transformation Initiative Research Assistant (ALTIRA) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Accelerated Langley Transformation Initiative Research Assistant (ALTIRA) is an artificial intelligence (AI) chatbot planned for general-purpose use in processing NASA CUI (not ITAR) data. | Accelerated Langley Transformation Initiative Research Assistant (ALTIRA) is an artificial intelligence (AI) chatbot planned for general-purpose use in processing NASA CUI (not ITAR) data. | The first phase of this project will establish a minimum viable product utilizing an Anthropic Claude 3.5-based large language model (LLM) AI chatbot as quickly as possible to meet NASA needs. Phase 2 will develop a more sustainable and feature-rich multi-model and multi-modal implementation. This effort is presently led and funded by LaRC, with consultation and technical assistance from personnel at HQ, GSFC, GRC, ARC, and JSC. It will be available to all NASA personnel. LaRC has funded the initial establishment, but insufficient resources are available to sustain this as an Agency-wide capability. This project leverages the Langley Transformation Initiative (LTI) sandbox account on Mission Cloud Platform (MCP) / Amazon Web Services (AWS) Commercial. | The first phase of this project will establish a minimum viable product utilizing an Anthropic Claude 3.5-based large language model (LLM) AI chatbot as quickly as possible to meet NASA needs. Phase 2 will develop a more sustainable and feature-rich multi-model and multi-modal implementation. This effort is presently led and funded by LaRC, with consultation and technical assistance from personnel at HQ, GSFC, GRC, ARC, and JSC. It will be available to all NASA personnel. LaRC has funded the initial establishment, but insufficient resources are available to sustain this as an Agency-wide capability. This project leverages the Langley Transformation Initiative (LTI) sandbox account on Mission Cloud Platform (MCP) / Amazon Web Services (AWS) Commercial. | |||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-543 | Aerial AId: Visual data processing with AI in emergency scenarios for quick decision making | a) Pre-deployment – The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | The Aerial AId activity enables the use of perception engines (primarily classification and object detection algorithms) in small uncrewed aerial systems (sUAS) for sUAS medical emergency response service providers. The goal of Aerial AId is to demonstrate a prototype optimized and trained perception engine, application-specific datasets, and an assurance framework. | This work will specifically enable stakeholders in the commercial sUAS sector that are well positioned to develop sUAS concepts for emergency medical response applications. | This project aims to provide the initial steps toward safely enabling artificial intelligence/machine learning (AI/ML) technology for aerial emergency medical response, with the potential to drastically improve emergency medical operations by increasing efficiency, reducing mission time, and reducing the strain on the humans involved in the operations. The vision for a future fully automated capability addresses the primary challenge to adoption of sUAS in the emergency medical response sector, and this technology development overlaps significantly with ARMD goals by creating a framework for assured autonomy. | This project aims to provide the initial steps toward safely enabling artificial intelligence/machine learning (AI/ML) technology for aerial emergency medical response, with the potential to drastically improve emergency medical operations by increasing efficiency, reducing mission time, and reducing the strain on the humans involved in the operations. The vision for a future fully automated capability addresses the primary challenge to adoption of sUAS in the emergency medical response sector, and this technology development overlaps significantly with ARMD goals by creating a framework for assured autonomy. | |||||||||||||||
| National Aeronautics And Space Administration | OSMA: Office of the Chief Safety & Mission Assurance | NASA-545 | Implement machine learning in NMIS to assist with mishap coding | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Leverage machine learning to train computers to review mishap and close call data in NMIS and suggest event coding characteristics, helping mishap program managers classify events more efficiently. | Reduce manual effort, save time, increase efficiency, and free up skilled subject matter experts for more complex tasks. | Suggested mishap findings and mishap event classifications. | b) Developed in-house | Yes | Suggested mishap findings and mishap event classifications. | Mishap event data, finding data, corrective action data, mishap classification data | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-546 | XR Enabling Technologies for Hardware Integration & Test Process Improvement | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This project will integrate speech-to-text and large language model functionality into the Mixed Reality Engineering Toolkit (MRET) to enable hands-free control and a natural-language interface for the program. | XR Enabling Technologies for Hardware Integration & Test Process Improvement | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-555 | DSM Autonomy: Capability Integration/Demonstration and Mission Resilience | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | DSM Autonomy: Capability Integration/Demonstration and Mission Resilience | DSM Autonomy: Capability Integration/Demonstration and Mission Resilience | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-556 | Implementing Intelligent Extensible Mission Architectures (IEMA) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | Implementing Intelligent Extensible Mission Architectures (IEMA) | Implementing Intelligent Extensible Mission Architectures (IEMA) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-558 | A diagnostic package to facilitate and enhance chemical mechanism implementations within regional and global atmospheric chemistry models | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A diagnostic package to facilitate and enhance chemical mechanism implementations within regional and global atmospheric chemistry models | A diagnostic package to facilitate and enhance chemical mechanism implementations within regional and global atmospheric chemistry models | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-559 | A machine learning tool to assist the validation of HPLC analysis of phytoplankton pigments at NASA GSFC | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A machine learning tool to assist the validation of HPLC analysis of phytoplankton pigments at NASA GSFC | A machine learning tool to assist the validation of HPLC analysis of phytoplankton pigments at NASA GSFC | Convolutional neural networks | Convolutional neural networks | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-560 | A Neural Network Retrieval for Aerosol Optical Depth | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A Neural Network Retrieval for Aerosol Optical Depth | A Neural Network Retrieval for Aerosol Optical Depth | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-561 | Adding Blowing Snow Diagnosis to the GEOS Capability for Improved Surface Mass Balance and Weather Applications | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Adding Blowing Snow Diagnosis to the GEOS Capability for Improved Surface Mass Balance and Weather Applications | Adding Blowing Snow Diagnosis to the GEOS Capability for Improved Surface Mass Balance and Weather Applications | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-562 | Aerosol and Cloud Detection Using Machine Learning Algorithms and Space-Based Lidar Data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Aerosol and Cloud Detection Using Machine Learning Algorithms and Space-Based Lidar Data | Aerosol and Cloud Detection Using Machine Learning Algorithms and Space-Based Lidar Data | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-563 | AI Foundation Model for Weather and Climate | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI Foundation Model for Weather and Climate | AI Foundation Model for Weather and Climate | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-564 | AI Prediction Algorithms for Autonomous Science and Operations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI Prediction Algorithms for Autonomous Science and Operations | AI Prediction Algorithms for Autonomous Science and Operations | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-565 | Algorithm developments for synergistic use of water quality products from multispectral and hyperspectral satellite observations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Algorithm developments for synergistic use of water quality products from multispectral and hyperspectral satellite observations | Algorithm developments for synergistic use of water quality products from multispectral and hyperspectral satellite observations | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-566 | An Urban Information System to Assess Neighborhood Climate Risk and Daily Exposures in Cities | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Developing ML models (e.g., random forests approaches) to capture climate responses in urban settings within NYC | Shows how NASA tools can inform risk management in cities, with direct interest and engagement from local governments, community groups, and private sector partners. Many of these tools are in development (low TRL/ARL) but show great promise for wider improvements and applications. | AI system is being used to develop flood models and heat exposure models for NYC. Outputs include flood depths and flood frequency analysis, as well as heat exposure for standard routes of walking/cycling/jogging through the city. | AI system is being used to develop flood models and heat exposure models for NYC. Outputs include flood depths and flood frequency analysis, as well as heat exposure for standard routes of walking/cycling/jogging through the city. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-567 | Analysis of GES DISC datasets through NLP and TF-IDF techniques analyzing publications | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Analysis of GES DISC datasets through NLP and TF-IDF techniques analyzing publications | Analysis of GES DISC datasets through NLP and TF-IDF techniques analyzing publications | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-569 | Applying machine learning in improving parameters in the global river routing model | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Applying machine learning in improving parameters in the global river routing model | Applying machine learning in improving parameters in the global river routing model | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-570 | Applying Machine Learning to Earth Ozone Retrieval: Improving Scattering Calculations and Correcting Systematic Errors | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Applying Machine Learning to Earth Ozone Retrieval: Improving Scattering Calculations and Correcting Systematic Errors | Applying Machine Learning to Earth Ozone Retrieval: Improving Scattering Calculations and Correcting Systematic Errors | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-575 | Automated Quality Control Scheme for GPM Satellite Precipitation Products | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automated Quality Control Scheme for GPM Satellite Precipitation Products | Automated Quality Control Scheme for GPM Satellite Precipitation Products | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-577 | Automated segmentation of chemical and mineralogical maps | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automated segmentation of chemical and mineralogical maps | Automated segmentation of chemical and mineralogical maps | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-578 | Autonomous Network Security | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Autonomous Network Security | Autonomous Network Security | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-579 | Autonomous Obstacle Avoidance of Mobile Robots using Reinforcement Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Autonomous Obstacle Avoidance of Mobile Robots using Reinforcement Learning | Autonomous Obstacle Avoidance of Mobile Robots using Reinforcement Learning | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-580 | Autonomous Path Planning of Robot Manipulators using Reinforcement Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Autonomous Path Planning of Robot Manipulators using Reinforcement Learning | Autonomous Path Planning of Robot Manipulators using Reinforcement Learning | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-581 | Background Matrix Generation with Stable Diffusion | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Background Matrix Generation with Stable Diffusion | Background Matrix Generation with Stable Diffusion | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-582 | Bias correction of fluorescence retrievals with neural networks | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Bias correction of fluorescence retrievals with neural networks | Bias correction of fluorescence retrievals with neural networks | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-583 | Bias correction of GEOS S2S forecast using machine learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Bias correction of GEOS S2S forecast using machine learning | Bias correction of GEOS S2S forecast using machine learning | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-584 | Bias correction to GOES-R PM2.5 for US EPA AirNow system | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Bias correction to GOES-R PM2.5 for US EPA AirNow system | Bias correction to GOES-R PM2.5 for US EPA AirNow system | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-585 | Bidirectional water leaving signal extraction and correction using neural networks | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Bidirectional water leaving signal extraction and correction using neural networks | Bidirectional water leaving signal extraction and correction using neural networks | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-587 | Boosting Inversion Research and Data Processing at GSFC with AI | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Boosting Inversion Research and Data Processing at GSFC with AI | Boosting Inversion Research and Data Processing at GSFC with AI | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-588 | Building On Fundamentals: Reducing Power and Improving Performance With Intelligent Radiometers | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Building On Fundamentals: Reducing Power and Improving Performance With Intelligent Radiometers | Building On Fundamentals: Reducing Power and Improving Performance With Intelligent Radiometers | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-589 | CALET Cosmic Ray Data Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | CALET Cosmic Ray Data Analysis | CALET Cosmic Ray Data Analysis | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-597 | Code Assistant Pilot Study | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Code Assistant Pilot Study | Code Assistant Pilot Study | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-598 | Dataset Recommender | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Link prediction | Dataset Recommender | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-599 | Deep learning for Environmental and Ecological Prediction-eValuation and Insight with Ensembles of Water quality (DEEP-VIEW) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Modular framework for multi-sensor segmentation | Deep learning for Environmental and Ecological Prediction-eValuation and Insight with Ensembles of Water quality (DEEP-VIEW) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-600 | Demonstrate High-Level Space Mission Requirement Flowdown to Software Requirements Using Generative Artificial Intelligence (AI) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Demonstrate High-Level Space Mission Requirement Flowdown to Software Requirements Using Generative Artificial Intelligence (AI) | Demonstrate High-Level Space Mission Requirement Flowdown to Software Requirements Using Generative Artificial Intelligence (AI) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-601 | Determining Cloud Particle Types Using Backscatter Lidar Data and a Clustering Approach | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automated identification of cloud ice crystal types and supercooled liquid | Significant time savings | Outputs include confidences/probabilities and most likely category | Outputs include confidences/probabilities and most likely category | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-602 | Development of a machine learning emulator for CAM-CMAQ to improve wildfire smoke forecasts | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Development of a machine learning emulator for CAM-CMAQ to improve wildfire smoke forecasts | Development of a machine learning emulator for CAM-CMAQ to improve wildfire smoke forecasts | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-604 | Development of digital twin technologies for climate prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Development of digital twin technologies for climate prediction | Development of digital twin technologies for climate prediction | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-605 | Development of real-time high-resolution air quality maps through combination of model output, satellite data, and ancillary Google datasets | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Development of real-time high-resolution air quality maps through combination of model output, satellite data, and ancillary Google datasets | Development of real-time high-resolution air quality maps through combination of model output, satellite data, and ancillary Google datasets | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-607 | Dynamic Time Interpolation using Spherical Harmonic Neural Operator Models | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Dynamic Time Interpolation using Spherical Harmonic Neural Operator Models | Dynamic Time Interpolation using Spherical Harmonic Neural Operator Models | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-608 | ECCOH & "Quick Chemistry" for Atmospheric Composition and Climate Studies | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ECCOH & "Quick Chemistry" for Atmospheric Composition and Climate Studies | ECCOH & "Quick Chemistry" for Atmospheric Composition and Climate Studies | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-610 | ESDIS VisML | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ESDIS VisML | ESDIS VisML | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-611 | Estimate global satellite AOD from visible to ultra-violet | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Estimate global satellite AOD from visible to ultra-violet | Estimate global satellite AOD from visible to ultra-violet | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-613 | Fast neural network emulator for vector radiative transfer simulations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Fast neural network emulator for vector radiative transfer simulations | Fast neural network emulator for vector radiative transfer simulations | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-614 | Fast radiative transfer emulation for GHG retrievals | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Fast radiative transfer emulation for GHG retrievals | Fast radiative transfer emulation for GHG retrievals | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-615 | Fast Retrieval Emulators for Instrumenting Climate Models | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Fast Retrieval Emulators for Instrumenting Climate Models | Fast Retrieval Emulators for Instrumenting Climate Models | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-616 | FluxSat Gross Primary Production - global upscaling of eddy covariance with machine learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | FluxSat Gross Primary Production - global upscaling of eddy covariance with machine learning | FluxSat Gross Primary Production - global upscaling of eddy covariance with machine learning | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-617 | GAIT: Generative AI Teleoperations for Robotics | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | GAIT: Generative AI Teleoperations for Robotics | GAIT: Generative AI Teleoperations for Robotics | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-618 | Generalized photometric neural network framework | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Generalized photometric neural network framework | Generalized photometric neural network framework | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-619 | Generalized time-series anomaly detection for rapid data survey of in-situ observations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Generalized time-series anomaly detection for rapid data survey of in-situ observations | Generalized time-series anomaly detection for rapid data survey of in-situ observations | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-620 | Generating localized air quality forecasts for US embassies using ML and GEOS-FP | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Generating localized air quality forecasts for US embassies using ML and GEOS-FP | Generating localized air quality forecasts for US embassies using ML and GEOS-FP | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-621 | Giovanni GPT | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Giovanni GPT | Giovanni GPT | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-623 | Ground Safety AI | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Ground Safety AI | Ground Safety AI | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-625 | Hybrid Model to emulate GEOS physics | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Hybrid Model to emulate GEOS physics | Hybrid Model to emulate GEOS physics | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-626 | Identification of dust from MODIS/VIIRS sensors using ML | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identification of dust from MODIS/VIIRS sensors using ML | Identification of dust from MODIS/VIIRS sensors using ML | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-628 | Improving greenhouse gas retrievals using a PCA-NN technique | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improving greenhouse gas retrievals using a PCA-NN technique | Improving greenhouse gas retrievals using a PCA-NN technique | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-629 | Improving quality of SO2 retrievals using a machine learning based analysis technique | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improving quality of SO2 retrievals using a machine learning based analysis technique | Improving quality of SO2 retrievals using a machine learning based analysis technique | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-630 | Improving uncertainty characterization for PACE polarimeters | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improving uncertainty characterization for PACE polarimeters | Improving uncertainty characterization for PACE polarimeters | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-631 | Integrating Research in Artificial Intelligence for Spacecraft Resilience (RAISR) into a FlatSat Hardware System | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Integrating Research in Artificial Intelligence for Spacecraft Resilience (RAISR) into a FlatSat Hardware System | Integrating Research in Artificial Intelligence for Spacecraft Resilience (RAISR) into a FlatSat Hardware System | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-632 | Investigating the vanishing martian induced magnetosphere boundary using a Machine Learning approach | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Investigating the vanishing martian induced magnetosphere boundary using a Machine Learning approach | Investigating the vanishing martian induced magnetosphere boundary using a Machine Learning approach | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-636 | Localized air quality forecasts for cities in Africa and South America | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Localized air quality forecasts for cities in Africa and South America | Localized air quality forecasts for cities in Africa and South America | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-639 | Machine Learning for Space Mission Applications | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine Learning for Space Mission Applications | Machine Learning for Space Mission Applications | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-640 | Machine Learning volcanic SO2 height retrievals | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine Learning volcanic SO2 height retrievals | Machine Learning volcanic SO2 height retrievals | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-641 | Machine-enabled modeling of terminus ablation for Greenland's outlet glaciers | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine-enabled modeling of terminus ablation for Greenland's outlet glaciers | Machine-enabled modeling of terminus ablation for Greenland's outlet glaciers | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-642 | Machine-Learning based Auroral Ionospheric electrodynamics Model (ML-AIM) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A suite of machine learning models for high-latitude electrodynamics | Machine-Learning based Auroral Ionospheric electrodynamics Model (ML-AIM) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-643 | Machine-learning based radiative transfer model and LWP/IWP | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine-learning based radiative transfer model and LWP/IWP | Machine-learning based radiative transfer model and LWP/IWP | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-644 | MAGSTAR: Magnetometer Interference Mitigation with Statistical Decomposition and Artificial Intelligence | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | MAGSTAR: Magnetometer Interference Mitigation with Statistical Decomposition and Artificial Intelligence | MAGSTAR: Magnetometer Interference Mitigation with Statistical Decomposition and Artificial Intelligence | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-645 | Making a long-term data record of opportunistic experiments with deep learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Making a long-term data record of opportunistic experiments with deep learning | Making a long-term data record of opportunistic experiments with deep learning | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-646 | MATISSE: MAchine inTelligence for Small SatellitEs - application of AI to permit autonomic science triage for small, power starved satellites | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | MATISSE: MAchine inTelligence for Small SatellitEs - application of AI to permit autonomic science triage for small, power starved satellites | MATISSE: MAchine inTelligence for Small SatellitEs - application of AI to permit autonomic science triage for small, power starved satellites | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-647 | MAVEN observation of Kelvin-Helmholtz Vortices using Statistical and Machine Learning Techniques | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | MAVEN observation of Kelvin-Helmholtz Vortices using Statistical and Machine Learning Techniques | MAVEN observation of Kelvin-Helmholtz Vortices using Statistical and Machine Learning Techniques | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-648 | Message Anomaly Detection for Command and Telemetry Systems (MADCAT) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Message Anomaly Detection for Command and Telemetry Systems (MADCAT) | Message Anomaly Detection for Command and Telemetry Systems (MADCAT) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-649 | Modeling NICER background | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Modeling NICER background | Modeling NICER background | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-650 | NAMASTE | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | NAMASTE | NAMASTE | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-651 | NASA Evolutionary Programming Analytic Center (NEPAC) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | NASA Evolutionary Programming Analytic Center (NEPAC) | NASA Evolutionary Programming Analytic Center (NEPAC) | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-652 | National Forest Mapping Old Growth Forest | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | National Forest Mapping Old Growth Forest | National Forest Mapping Old Growth Forest | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-653 | Natural Language Processing and Topic Modeling of the 25-Year General Coordinates Network (GCN) Circulars Archive | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Natural Language Processing and Topic Modeling of the 25-Year General Coordinates Network (GCN) Circulars Archive | Natural Language Processing and Topic Modeling of the 25-Year General Coordinates Network (GCN) Circulars Archive | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-654 | Navajo Nation Feature Mapping | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Navajo Nation Feature Mapping | Navajo Nation Feature Mapping | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-655 | Neural Network based observation operator to bias correct SMAP brightness temperatures for assimilation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Neural Network based observation operator to bias correct SMAP brightness temperatures for assimilation | Neural Network based observation operator to bias correct SMAP brightness temperatures for assimilation | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-656 | Neutron star synthetic wave form generation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Neutron star synthetic wave form generation | Neutron star synthetic wave form generation | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-657 | Next generation cloud and aerosol parameterizations for atmospheric models | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Next generation cloud and aerosol parameterizations for atmospheric models | Next generation cloud and aerosol parameterizations for atmospheric models | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-658 | Nitrogen dioxide retrieval using hyper-spectral imagers | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Nitrogen dioxide retrieval using hyper-spectral imagers | Nitrogen dioxide retrieval using hyper-spectral imagers | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-659 | Physics of neural network power-law scaling | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Physics of neural network power-law scaling | Physics of neural network power-law scaling | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-660 | Planetary Trajectory Design Using Generative AI Tools | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Planetary Trajectory Design Using Generative AI Tools | Planetary Trajectory Design Using Generative AI Tools | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-661 | Planted Area mapping in food insecure conflict zones | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Planted Area mapping in food insecure conflict zones | Planted Area mapping in food insecure conflict zones | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-662 | PM2.5 Estimation using MERRA2 and Advanced machine learning model over US | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | PM2.5 Estimation using MERRA2 and Advanced machine learning model over US | PM2.5 Estimation using MERRA2 and Advanced machine learning model over US | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-663 | PM2.5 Product development using AERONET data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | PM2.5 Product development using AERONET data | PM2.5 Product development using AERONET data | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-664 | Prediction of Spaceflight Mass Spectrometry Chemical Information with Neural Networks | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of Spaceflight Mass Spectrometry Chemical Information with Neural Networks | Prediction of Spaceflight Mass Spectrometry Chemical Information with Neural Networks | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-665 | Prediction of whitecaps | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of whitecaps | Prediction of whitecaps | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-666 | Quantifying Uncertainty and Constraining Parameterizations of Clouds in Earth System Models using NASA Observations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Quantifying Uncertainty and Constraining Parameterizations of Clouds in Earth System Models using NASA Observations | Quantifying Uncertainty and Constraining Parameterizations of Clouds in Earth System Models using NASA Observations | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-667 | Rangelands Water Monitoring and Forecasting System | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Rangelands Water Monitoring and Forecasting System | Rangelands Water Monitoring and Forecasting System | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-668 | Reconstruction of VIIRS Level1 geolocation data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Reconstruction of VIIRS Level1 geolocation data | Reconstruction of VIIRS Level1 geolocation data | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-669 | Reproducing surface irradiance and penetration depth retrievals using machine learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Reproducing surface irradiance and penetration depth retrievals using machine learning | Reproducing surface irradiance and penetration depth retrievals using machine learning | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-670 | RST I&T Science Data Telemetry Query | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | RST I&T Science Data Telemetry Query | RST I&T Science Data Telemetry Query | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-671 | RST Image Anomaly Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | RST Image Anomaly Detection | RST Image Anomaly Detection | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-672 | SatVision: Precursor for a foundation model developed using MODIS surface reflectance data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | SatVision: Precursor for a foundation model developed using MODIS surface reflectance data | SatVision: Precursor for a foundation model developed using MODIS surface reflectance data | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-673 | Science Autonomy Applications for ExoMars/MOMA | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Science Autonomy Applications for ExoMars/MOMA | Science Autonomy Applications for ExoMars/MOMA | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-674 | Science Keyword Link Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Science Keyword Link Prediction | Science Keyword Link Prediction | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-675 | Self-supervised learning for modeling gamma-ray variability in blazars | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Self-supervised learning for modeling gamma-ray variability in blazars | Self-supervised learning for modeling gamma-ray variability in blazars | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-678 | Smart NINT | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Smart NINT | Smart NINT | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-680 | Software Issue Classification with LLM | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Software Issue Classification with LLM | Software Issue Classification with LLM | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-683 | SpRInT to Advance the SOA of Intelligent Space Systems | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | SpRInT to Advance the SOA of Intelligent Space Systems | SpRInT to Advance the SOA of Intelligent Space Systems | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-684 | Super Resolution to enhance climate reanalysis data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Super Resolution to enhance climate reanalysis data | Super Resolution to enhance climate reanalysis data | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-685 | Super-Resolution for Nighttime Lights | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Super-Resolution for Nighttime Lights | Super-Resolution for Nighttime Lights | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-686 | Terrain Modeling and Landmark Navigation with Radiance Fields | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Terrain Modeling and Landmark Navigation with Radiance Fields | Terrain Modeling and Landmark Navigation with Radiance Fields | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-687 | Terrestrial Environmental Rapid-Replicating and Assimilation Hydrometeorological (TERRAHydro) System | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Terrestrial Environmental Rapid-Replicating and Assimilation Hydrometeorological (TERRAHydro) System | Terrestrial Environmental Rapid-Replicating and Assimilation Hydrometeorological (TERRAHydro) System | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-688 | Text to Spacecraft | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Text to Spacecraft | Text to Spacecraft | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-690 | The evaluation of clouds in R21C data via a ML-based MODIS simulator | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The evaluation of clouds in R21C data via a ML-based MODIS simulator | The evaluation of clouds in R21C data via a ML-based MODIS simulator | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-692 | Toward ice sheet surface data assimilation: Employing satellite observations and machine learning to improve model representation of ice sheet surface melt | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify sources of error in an Earth system model component. | Improve the representation of the Earth system in diagnostic model simulations, data assimilation, and predictions. | The AI system outputs predictions of model error under a wide range of conditions. | The AI system outputs predictions of model error under a wide range of conditions. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-693 | Towards an Integrated Observation System for OH: A Case Study in the Tropics | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Towards an Integrated Observation System for OH: A Case Study in the Tropics | Towards an Integrated Observation System for OH: A Case Study in the Tropics | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-694 | Towards Learning-based Visual Perception with GAVIN: the Goddard AI Verification and INtegration Tool Suite | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Towards Learning-based Visual Perception with GAVIN: the Goddard AI Verification and INtegration Tool Suite | Towards Learning-based Visual Perception with GAVIN: the Goddard AI Verification and INtegration Tool Suite | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-703 | Depth from Spectral Defocus for Lunar Regolith | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Our technique reconstructs a 3D model of a scene from images taken from a single viewpoint and illuminated with light of different wavelengths, by estimating a depth map from the defocus of different objects in the captured images. This pipeline has potential applications for 3D reconstruction on upcoming lunar missions carrying a multi-spectral imaging system as payload. | Our pipeline has shown success on synthetic lunar datasets taken with the Ames Imaging Module (NIRVSS-AIM) camera system. | 3D model of a scene | 3D model of a scene | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-704 | Neural Scene Representations for Lunar Terrain Modeling | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | We have developed LunarNRM, a novel neural surface reconstruction algorithm based on Neural Radiance Fields (NeRFs) that incorporates shadow-aware and depth-aware methodologies. | By integrating multi-sensor data from the Lunar Reconnaissance Orbiter (LRO)—specifically optical data from the Narrow Angle Camera (NAC) and altimeter data from the Lunar Orbiter Laser Altimeter (LOLA)—we have demonstrated that LunarNRM can effectively reconstruct these critical regions, directly supporting NASA’s Artemis campaign. | Our LunarNRM generates shadow-controlled Digital Elevation Models (DEMs) of the lunar surface, enabling accurate modeling and relighting of largely shadowed regions such as craters in the lunar south pole. | Our LunarNRM generates shadow-controlled Digital Elevation Models (DEMs) of the lunar surface, enabling accurate modeling and relighting of largely shadowed regions such as craters in the lunar south pole. | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-705 | Data Augmentation Pipeline for Zero-Shot Sim-to-Real Transfer in Vision-Based Robot Navigation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | We developed a data augmentation pipeline to enhance the training of vision-based navigation models for robotics, addressing the challenges of limited real-world data. | By combining foundation model-based segmentation with CycleGAN for sim-to-real style transfer, our approach generates realistic, labeled images from synthetic data, enabling models to better generalize to real-world environments. This innovative capability enhances vision-based navigation tasks, such as vehicle pose estimation and road segmentation, significantly closing the sim-to-real gap in robotic perception applications. | realistic, labeled images from synthetic data | realistic, labeled images from synthetic data | |||||||||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-706 | Safe Autonomous Taxiing with Vision-Based Navigation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | TaxiNet is a vision-based deep learning model developed to enable autonomous vehicles to follow a designated line safely during aircraft taxiing, a crucial application for assured autonomy research. This project’s goal is to demonstrate safe, closed-loop control using neural network perception from camera images, focusing on meeting rigorous safety standards for learning-enabled components (LECs) in safety-critical contexts. | TaxiNet is a vision-based deep learning model developed to enable autonomous vehicles to follow a designated line safely during aircraft taxiing, a crucial application for assured autonomy research. This project’s goal is to demonstrate safe, closed-loop control using neural network perception from camera images, focusing on meeting rigorous safety standards for learning-enabled components (LECs) in safety-critical contexts. | We developed both a physical rover for real-world testing and a simulator in Unreal Engine with a realistic NASA Ames campus model for extensive, simulation-based validation. | We developed both a physical rover for real-world testing and a simulator in Unreal Engine with a realistic NASA Ames campus model for extensive, simulation-based validation. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-708 | Global, Seasonal Mars Frost Maps | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Global Martian frost maps derived from five remote sensing datasets and processed with tools like CNNs and other data science techniques. Publicly available on JMARS. | (1) Mars seasonal frost maps that can be used by other researchers, enabling more time on analysis rather than data collection. (2) Demonstration of a deeply collaborative effort between data scientists and physical scientists. | Near-global maps showing detections of seasonal frost in visible datasets (HiRISE, CTX) with associated uncertainties. | 11/01/2024 | b) Developed in-house | No | Near-global maps showing detections of seasonal frost in visible datasets (HiRISE, CTX) with associated uncertainties. | Visible images acquired from orbit of the Martian surface. Some images were human-labeled, then used in training, validation, and testing of the CNN. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-709 | GenAI Agent for Interacting with Robots (https://github.com/nasa-jpl/rosa) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | ROSA is an AI agent that integrates with the ROS ecosystem to help develop and operate robots using natural language. It is able to read live telemetry data from robotics systems and formulate answers to user queries. | ROSA is an AI agent that integrates with the ROS ecosystem to help develop and operate robots using natural language. It is able to read live telemetry data from robotics systems and formulate answers to user queries. | answers to user queries | 01/08/2023 | b) Developed in-house | No | answers to user queries | live telemetry data from robotics systems | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-711 | SLIM: Software Lifecycle Improvement & Modernization | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Use GenAI to help automate integration of software best practices, including adding appropriate documents, testing plans, tests, etc. | Use GenAI to help automate integration of software best practices, including adding appropriate documents, testing plans, tests, etc. | software best practices | 01/02/2022 | b) Developed in-house | No | software best practices | documents, testing plans, tests, etc. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-712 | Using LLMs for Documents and Requirements Analysis | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Use GenAI to analyze documents and improve the requirements development process | Use GenAI to analyze documents and improve the requirements development process | analysis | 01/10/2024 | b) Developed in-house | No | analysis | documents, testing plans, tests, etc. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-715 | Purchase Card Management System (PCMS) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Purchase card application uses ML model to suggest if a purchase may be a taggable asset or a chemical. | Purchase card application uses ML model to suggest if a purchase may be a taggable asset or a chemical. | Metadata tagging recommendation | b) Developed in-house | No | Metadata tagging recommendation | purchase card transactions | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-716 | New Technology and Software Reporting (NTR) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | NTR application uses ML model to suggest a Technology Category (e.g. Aerospace, Robotics) for the new technology being reported | NTR application uses ML model to suggest a Technology Category (e.g. Aerospace, Robotics) for the new technology being reported | Category classification | b) Developed in-house | No | Category classification | technology reports | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-717 | Artificial Intelligence Leveraged Information Capture and Exploration (ALICE) (ITMX) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The project will streamline the archiving of information and build an explorable knowledge graph of organizational communications. This project seeks to provide several significant benefits to the agency. | The project will streamline the archiving of information and build an explorable knowledge graph of organizational communications. This project seeks to provide several significant benefits to the agency. | The project will streamline the archiving of information and build an explorable knowledge graph of organizational communications. This project seeks to provide several significant benefits to the agency. | The project will streamline the archiving of information and build an explorable knowledge graph of organizational communications. This project seeks to provide several significant benefits to the agency. | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-718 | TxP: Artificial Intelligence for Curation (AI-Cure) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The AI-CURE project will integrate advanced AI and machine learning models to automate and standardize data curation across NASA’s SMD databases. | This will result in cost savings, faster data availability, improved data quality, and better support for NASA’s open science initiatives. This project enables NASA to advance data curation processes, accelerating scientific discovery while reducing manual effort and operational costs. | curated data | curated data | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-719 | TxP: Federated Data Discovery and Content Creation (FDDCC) KDP-Formulation (ITMX) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The FDDCC project is guided by a set of core objectives aimed at transforming how data is accessed, managed, and utilized across the agency. These objectives focus on enhancing data discoverability, ensuring secure access, improving system scalability, and fostering a user-centered experience. | NASA has amassed a significant trove of data that holds the potential to drive advancements across numerous domains. However, the organization faces significant challenges in harnessing this potential. Much of its valuable data remains trapped within isolated legacy systems, creating silos that hinder efficient access and use. This segmentation is further complicated by the dual locations of these systems – both in the cloud and on-premises – with each presenting its unique set of access challenges due to firewalls, security control boundaries and other barriers. Numerous distributed systems within NASA provide localized search functions for their specific datasets. These capabilities, which cater to both structured and unstructured data, highlight the latent potential and need for a comprehensive approach that can unify and streamline data and information access across the entire organization. By developing a unified search interface, implementing robust identity and access management, creating a comprehensive data catalog, and building a scalable infrastructure, the FDDCC project seeks to empower users to efficiently find, access, and manage data. Additionally, the project emphasizes user satisfaction through continuous feedback and iterative improvements, ensuring that the platform meets the evolving needs of the agency. | a unified search interface | a unified search interface | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-721 | Fast Machine Learning Lidar Surrogate Simulator: For Pristine Clear Sky | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ML-based lidar radiative transfer simulation for clear sky | ML-based lidar radiative transfer simulation for clear sky | ML-based lidar radiative transfer simulation for clear sky | ML-based lidar radiative transfer simulation for clear sky | |||||||||||||||||||||
| National Aeronautics And Space Administration | LaRC: Langley Research Center | NASA-722 | A Neural Network Parametrization of Volumetric Cloud Fraction Profiles Using Satellite Observations and MERRA-2 Reanalysis Meteorological Data | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Cloud parameterization to improve climate models | Cloud parameterization to improve climate models | Cloud parameterization to improve climate models | Cloud parameterization to improve climate models | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-723 | Intelligent Chatbot for Science using Microsoft Copilot | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | It uses Microsoft's large language model with scientifically curated information from NASA's VEDA (Visualization, Exploration, and Data Analysis) platform to assist users in search, discovery, and analysis. | It uses Microsoft's large language model with scientifically curated information from NASA's VEDA (Visualization, Exploration, and Data Analysis) platform to assist users in search, discovery, and analysis. | scientifically curated information | scientifically curated information | |||||||||||||||||||||
| National Aeronautics And Space Administration | WS: White Sands Test Facility | NASA-732 | Cache-Augmented Generation Document Search (converted to collective) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Science | Retired | c) Not high-impact | Not high-impact | Generative AI | Cache-Augmented Generation Document Search | Cache-Augmented Generation Document Search | answers | answers | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-734 | Personal Health Information (PHI) AI Platform | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Ability to use PHI data | Ability to use PHI data | Ability to use PHI data | Ability to use PHI data | |||||||||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-740 | Machine Learning for Entity Matching | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI use case addresses a critical data quality challenge within NASA’s enterprise Salesforce platform by enabling NASA to accurately merge extremely large institutional datasets from multiple sources into a single, unified system, while identifying and preventing duplicate account entries that result from inconsistencies in naming, formatting, and metadata. With these inconsistencies across sources, traditional matching methods are insufficient to maintain a clean and reliable master list of organizations. By implementing AI-assisted entity matching and de-duplication, NASA can ensure accurate, non-redundant records that enhance reporting, support seamless user experiences, and enable the agency to scale its engagement infrastructure without compromising data integrity. | STMD is advancing this initiative in support of its broader effort to modernize how NASA collaborates with academia, industry, and government entities. Given the large number of individuals and organizations seeking to engage with NASA, it is essential that institutional records remain clean, traceable, and well-governed. This use case directly enhances STMD’s ability to manage an authoritative system of record that supports outreach, partnership tracking, and strategic engagement. By implementing AI-driven matching and de-duplication, STMD is not only enabling faster and more reliable onboarding of new institutional data, but also delivering enterprise-wide value by improving data integrity across all applications built on the shared Salesforce platform. These improvements will ensure that external users experience a seamless, accurate interface when associating with their organization—while internal teams gain more trustworthy data for programmatic planning, performance tracking, and cross-agency collaboration. | Classifications/Predictions of matching records | Classifications/Predictions of matching records | |||||||||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-741 | Automated Data Normalization for Institutional Records | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI use case aims to automate the cleanup, transformation, and normalization of large institutional datasets that feed into NASA’s enterprise Salesforce platform. Currently, preparing these records requires time-consuming manual work to merge and standardize disparate data sources before they can be reliably used in NASA systems. By leveraging AI and machine learning, this solution will drastically reduce manual effort, improve data accuracy, and ensure a consistent, high-quality institutional dataset that supports external engagement, enterprise reporting, and mission readiness across the agency. | STMD has identified a strategic requirement to modernize how the agency tracks and manages interactions with academia and industry in order to promote increased collaboration, expand engagement across the innovation ecosystem, and accelerate the maturation of strategic partnerships. By automating this critical data pipeline, STMD is enabling a more connected, data-driven infrastructure that not only supports its own mission but also delivers agency-wide benefits through improved data accuracy, consistency, and accessibility across all enterprise applications within the platform. End users will benefit from a more robust, reliable, and searchable system of record, making it easier to find and associate with the correct organizations during their interactions with NASA. | Data transformations | Data transformations | |||||||||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-742 | Automating Data Stewardship for Temporary Account Reconciliation | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI use case focuses on automating the manual review and verification process for temporary “write-in” accounts submitted by external users within NASA’s agency-wide enterprise Salesforce platform. When users cannot find their organization in the existing institutional database, they submit temporary account records that require data stewards to manually research, validate, and reconcile each entry—often spending up to 50 hours per week. By leveraging AI-driven entity matching, web verification, and metadata enrichment, this solution will significantly reduce manual effort, accelerate the onboarding of new organizations, and improve data accuracy. Automating this process enhances scalability, ensures compliance with data governance policies, and improves the end-user experience across all applications using the enterprise platform. | Given the large number of individuals and institutions that show interest in collaborating with NASA, we must ensure that records associated with specific entities remain clean, accurate, and traceable in order to enhance our ability to manage partnerships, track engagement trends, and support strategic outreach across the agency. STMD is championing this automation effort to streamline the reconciliation of temporary accounts, which is essential for maintaining the integrity of NASA’s enterprise-wide contact and organization data. This work not only supports STMD’s mission to accelerate the development and adoption of transformative space technologies through broader collaboration but also strengthens the agency’s ability to scale engagement efficiently and responsibly across academia, industry, and government. | Deduplication | Deduplication | |||||||||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-743 | Ask Tech Port | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Ongoing development and enhancement of the NASA Technology Portfolio Management System (TechPort) "AskTechPort" generative AI tool. AskTechPort leverages the TechPort data set of over 18,000 current and historical NASA applied research and experimental development investments to answer questions about NASA's technology portfolio and capabilities. AskTechPort provides an interactive feature enabling users to inquire about NASA technology investment areas, maturity of those technologies, organizations developing those technologies, and where work is being performed. | TechPort sees an average of over 12,000 unique users each month. These users conduct over 250,000 searches each year of the NASA technology portfolio. The benefit of "Ask Tech Port" is that it reduces the total time to search or retrieve the results of a data inquiry by 50%, providing interactive features that use natural language processing to offer a faster inquiry mechanism than historical faceted search techniques. | Interactive responses | Interactive responses | |||||||||||||||||||||
| National Aeronautics And Space Administration | HQ: Headquarters | NASA-744 | Enhancements to T-Rex and D-Rex | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enhancement and further training of the NASA Technology Taxonomy and Target Technology Destination recommendation systems, known as T-Rex and D-Rex respectively. T-Rex and D-Rex are predictive AI models residing in the NASA TechPort system that help technologists more efficiently categorize technology using the NASA Technology Taxonomy and Target Technology Destinations. These models help ensure completeness of the categorization of technologies in the portfolio, and are used as verification and validation tools. | The NASA TechPort system catalogs over 2,000 technology investments made by the Agency each year. Technologists spend a significant amount of time manually categorizing technology project investments according to the NASA Technology Taxonomy and Target Technology Destinations. The benefit to the American public is improved model accuracy by more than 8% and savings of over 330 hours of manual labor each year. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | OSMA: Office of the Chief Safety & Mission Assurance | NASA-746 | AI-enabled Risk-Informed Assessment Selection (RIAS) tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other (use other text field) | Pilot | c) Not high-impact | Not high-impact | Generative AI | This project leverages LLM-based tools to automate data ingestion, risk trend detection and scoring, and compliance determination across a variety of NASA applicable data. The innovative approach aims to streamline risk identification and prioritization while positioning our team at the forefront of AI-enabled risk management practices. | Leverage a wide range of data to identify risk trends and prioritize areas for assessment and risk reduction. | Prioritized assessment candidates, potential areas of mission risk. | b) Developed in-house | No | Prioritized assessment candidates, potential areas of mission risk. | Lessons learned data, mishap data, risk data, assessment data, project data. | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | OSMA: Office of the Chief Safety & Mission Assurance | NASA-747 | AI-Assisted Analysis Across the IV&V and Software Assurance Lifecycle | a) Pre-deployment – The use case is in a development or acquisition status. | Other (use other text field) | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Leverage opportunities to automate overly manual analysis tasks, with a goal of amplifying productivity and efficiency, allowing employees to free up their time for more complex tasks. | Reduce false positives in static analysis, trace design to code, identify issues, generate comments, requirements/design traceability, gap detection, change impact analysis, completeness/correctness checks, generate edge/off-nominal test cases, identify missing tests, assess requirement flow-down. | Insights into potential software defects and anomalies, impacts of vulnerabilities on flight assets, better understanding of software complexities and risks to focus IV&V and software assurance services. | Insights into potential software defects and anomalies, impacts of vulnerabilities on flight assets, better understanding of software complexities and risks to focus IV&V and software assurance services. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-748 | Module for Event Driven Operations on Spacecraft (MEDOS) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | MEDOS, a flight-tested onboard decision engine | Event-driven operations (MEDOS) is agentic-focused, requiring little to no training data (compared with ML and RL methods), instead focusing on using existing subject matter expertise in complement with state-of-the-art intelligent systems. | Event-driven operations (MEDOS) is agentic-focused, requiring little to no training data (compared with ML and RL methods), instead focusing on using existing subject matter expertise in complement with state-of-the-art intelligent systems. | Event-driven operations (MEDOS) is agentic-focused, requiring little to no training data (compared with ML and RL methods), instead focusing on using existing subject matter expertise in complement with state-of-the-art intelligent systems. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-749 | Remote Coordination, Actuation, and Planning (ReCAP) Cooperative MultiAgent Architecture | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | ReCAP provides an architecture for lightweight, efficient coordination of highly-capable agents in a comms-limited environment. By providing agents with high-level, lightweight information about the capabilities of other agents in the fleet, ReCAP can facilitate allocation of tasks that are dynamically discovered and assigned via an auction. Auctions are conducted for particular tasks, and agents formulate bids based on encoded subject matter expertise predicting how a given action will affect their operations. For example, one agent may request that two others bid on a sampling task. The two agents bidding may possess different instruments for sampling, one of them able to sample remotely, but at low fidelity, the other able to sample at high fidelity but requiring a physical sample be collected. The auctioneer agent will consider its own mission goals and award the task to what it deems to be the highest bid. Onboard single agents, ReCAP provides a planning/scheduling module and a behavior tree for real-time control. The planning/scheduling module includes the actions of other connected agents, so that any agent may form a plan sequence involving actions from multiple agents – this is what triggers an auction. Finally, the actions that can be executed onboard are handled by a behavior tree, which is more reactive and requires less frequent reconfiguration than a static planner. | Compared to existing multi-agent control systems, ReCAP requires no training data, before or during operations, and does not require centralized control. It was conceived alongside a NASA push for extensible mission architectures, intended to provide technology that would allow missions to interoperate as new assets are added or encountered, and to enable on-demand coordination between otherwise separate missions. ReCAP enables this capability by basing coordination on high-level, lightweight communications, and its auctioning system prevents the need for interchange of complex state information between agents. ReCAP has been deployed to success in field, laboratory, and simulated use cases across terrestrial and space applications. | Through communications interfaces, ReCAP outputs (as well as receives) state and auction information from other agents. The behavior tree additionally outputs control commands at a level tunable to the given use case; ReCAP has been implemented with direct vehicle control, commanding velocity and position changes, as well as at a much higher level, giving commands to an existing onboard controller. | Through communications interfaces, ReCAP outputs (as well as receives) state and auction information from other agents. The behavior tree additionally outputs control commands at a level tunable to the given use case; ReCAP has been implemented with direct vehicle control, commanding velocity and position changes, as well as at a much higher level, giving commands to an existing onboard controller. | |||||||||||||||||||||
| National Aeronautics And Space Administration | OPS: Office of Protective Services | NASA-753 | Intelligent Camera Analytics for Security and Operations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Traditional video surveillance requires continuous human monitoring, which is resource-intensive and prone to missed events. By applying AI-driven camera analytics, this use case addresses the need for faster, more reliable detection of security incidents and operational insights. The purpose is to enhance situational awareness, reduce manual effort, and improve response times while also enabling organizations to repurpose video data for business and safety improvements. | By leveraging AI-driven video surveillance, organizations can significantly improve efficiency and security outcomes. Automated monitoring reduces the need for constant human oversight, lowering operational costs while also decreasing the likelihood of missed incidents. Faster and more accurate detection of anomalies—such as unauthorized access, unattended objects, or unusual behavior—enhances safety for personnel and protects critical assets. In addition, video analytics provide valuable secondary benefits, including occupancy tracking, traffic flow analysis, and facility utilization insights that support data-driven decision-making. These capabilities enable scalability without requiring proportional increases in staffing, delivering measurable return on investment through reduced false alarms, quicker response times, improved compliance with safety protocols, and optimized use of resources. | The AI system generates real-time outputs such as object detections, activity classifications, and anomaly alerts. These may include identifying people, vehicles, or objects of interest; detecting unusual behaviors like loitering or perimeter breaches; and flagging safety or security incidents for operator review. The system can also produce analytics dashboards and reports, summarizing trends such as occupancy levels, traffic flow, and space utilization, which support both security response and operational decision-making. | 01/01/2016 | a) Purchased from a vendor | Milestone, Genetec, Verkada, Lenel | Yes | The AI system generates real-time outputs such as object detections, activity classifications, and anomaly alerts. These may include identifying people, vehicles, or objects of interest; detecting unusual behaviors like loitering or perimeter breaches; and flagging safety or security incidents for operator review. The system can also produce analytics dashboards and reports, summarizing trends such as occupancy levels, traffic flow, and space utilization, which support both security response and operational decision-making. | Since this use case leverages commercial off-the-shelf products, training and fine-tuning of the AI models are performed by the vendors themselves. Milestone, Genetec, and Verkada typically rely on large, vendor-curated datasets of video footage that include a wide range of environments, objects, behaviors, and lighting conditions. These datasets are used to train computer vision models for object detection, classification, activity recognition, and anomaly detection. Vendors also apply continuous model evaluation and updates using customer feedback and aggregated, anonymized performance data to improve accuracy, reduce false positives, and adapt to evolving security scenarios. Our role is focused on deployment and operational use of these capabilities rather than direct model training. | No | No | ||||||||||||||
| National Aeronautics And Space Administration | ARC: Ames Research Center | NASA-764 | Space Precision Health Foundation Model | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This project aims to leverage data available within the NASA Open Science Data Repository to train a model or set of models that can predict patterns related to space health outcomes. | The models developed will enable the scientific community to better understand how biological systems respond to spaceflight stressors and inform mission planners. | This system will be able to generate predictions related to space precision health. For example, it could predict possible drug targets or cellular states. | This system will be able to generate predictions related to space precision health. For example, it could predict possible drug targets or cellular states. | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-775 | Foundation Model for Lunar Science | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | The purpose of Lunar FM is to overcome the limitations of traditional, task-specific Machine Learning (ML) models in analyzing the vast, diverse, and long-term datasets collected by the Lunar Reconnaissance Orbiter (LRO) mission. This includes addressing data challenges such as heterogeneity in spatial and temporal resolution, differences in data formats and calibration standards, and the significant lack of large, accurately labeled datasets (sparse ground truth) for supervised learning. | The LRO Foundation Model (FM) is useful because it offers key advantages over traditional ML: Generalization: It can learn general-purpose representations that generalize better across different tasks and even data from different instruments with minimal fine-tuning. Low-Data Regimes: It excels in transfer learning, allowing knowledge from pretraining on massive unlabeled datasets to be effectively applied to downstream tasks that only have limited labeled data. Efficiency: It significantly reduces the dependence on time-consuming manual data labeling by using self-supervised pretraining on vast amounts of unlabeled LRO data. Scientific Advancement: It enables understanding of lunar features by jointly analyzing multiple data modalities, leading to new scientific insights and supporting mission-critical activities like landing site selection. 
| Foundation Model: a model that can be adapted for many different applications. Representations: rich, general-purpose, and robust representations of lunar surface features and spatial relationships | Foundation Model: a model that can be adapted for many different applications. Representations: rich, general-purpose, and robust representations of lunar surface features and spatial relationships | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-790 | JSC NC PRA Cut Set Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Probabilistic Risk Assessment (PRA) identifies minimal cut sets - the smallest combinations of simultaneous failures that cause a system or mission failure. | Probabilistic Risk Assessment (PRA) identifies minimal cut sets - the smallest combinations of simultaneous failures that cause a system or mission failure. | Probabilistic Risk Assessment (PRA) identifies minimal cut sets - the smallest combinations of simultaneous failures that cause a system or mission failure. | Probabilistic Risk Assessment (PRA) identifies minimal cut sets - the smallest combinations of simultaneous failures that cause a system or mission failure. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JSC: Johnson Space Center | NASA-791 | JSC SMA NT Paper WAD Data Extraction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This application aims to provide the Quality Flight Equipment Division with a robust capability for extracting data from paper Work Authorization Documents (WADs) using AI models. The data inputs include scanned PDF versions of original paper WADs stored in the QFED data center. This application will leverage Google's Gemini Pro models to accomplish this effort. Users will be able to query critical quality-based information from large numbers of WADs to better understand Task Performance Sheet and Discrepancy Report execution trends and areas for improvement. | This application aims to provide the Quality Flight Equipment Division with a robust capability for extracting data from paper Work Authorization Documents (WADs) using AI models. The data inputs include scanned PDF versions of original paper WADs stored in the QFED data center. This application will leverage Google's Gemini Pro models to accomplish this effort. Users will be able to query critical quality-based information from large numbers of WADs to better understand Task Performance Sheet and Discrepancy Report execution trends and areas for improvement. | Information on scanned PDF documents. | Information on scanned PDF documents. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-797 | AI Tool for Automatic Identification of Soliton Features in SWOT data | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Automatically detect soliton features for scientists studying the ocean. | Must be faster than manual analysis of the SWOT data. | Predictions. | Predictions. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-798 | AI Based Digital Twin Interoperability Schema | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Reinforcement Learning | Using the IEEE 2874 Spatial Web standard to implement a new Hyperspace Modeling Language (HSML) and Hyperspace Transaction Protocol (HSTP) to establish communications among heterogeneous simulation platforms. | Enables collaboration of digital twins across different platforms. | Recommendation list. | b) Developed in-house | No | Recommendation list. | Omniverse and Unity digital twin objects data are used. | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-803 | OCO2/3 Bad Pixel Map | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A machine learning approach is developed to improve the bad pixel map that masks damaged or unusable pixels in the imaging spectrometers of the Orbiting Carbon Observatory-2 and -3. | Improved science results. | Improvements in the data. | Improvements in the data. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-804 | SpecTF Cloud Screening for Imaging Spectroscopy | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A deep learning model for accurate, data-driven cloud detection in imaging spectroscopy data. | Improved science results. | Learns features of the data. | Learns features of the data. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-805 | Decision Theoretic Planning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Algorithms to plan and optimize different outcomes | Increase autonomy and efficiency. | Algorithms to plan and optimize different outcomes | Algorithms to plan and optimize different outcomes | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-806 | Cloud avoidance for Atmospheric Retrieval Missions | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Methods for atmospheric retrieval of earth science data. | Improved science results. | Methods for atmospheric retrieval of earth science data. | Methods for atmospheric retrieval of earth science data. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-807 | Dynamic, Intelligent, Tomographic Imaging | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Methods for analyzing 3D imaging data | Improved methods for data understanding. | Methods for analyzing 3D imaging data | Methods for analyzing 3D imaging data | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-808 | Adversarial Policy Evolution via AI | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Reinforcement Learning | Develop and improve decision-making strategies by having them compete against adversaries, learning and evolving through repeated challenges. | Improved methods. | Learning algorithm. | 01/09/2021 | b) Developed in-house | No | Learning algorithm. | Mission engineering data. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-809 | Adaptive Problem Solving / Hyperparameter Optimization | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Adaptable algorithms to analyze hyperspectral data | Improved science analysis. | Learns features of the data. | 01/09/2024 | b) Developed in-house | No | Learns features of the data. | Earth Science hyperspectral data. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-811 | Onboard Science Instrument Autonomy (OSIA) for OWLS | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Onboard biosignature detection for a suite of life-detection instruments (motility, fluorescence, metabolism indicators) and summarizing the data to overcome bandwidth constraints (e.g., at Enceladus or Europa). | Algorithms successfully identified biosignatures during a field test at Mono Lake, CA and achieved compression ratios of 400-1600x on 3 instruments. | Probability that the data contains a biosignature and a summarized/compressed version of the data (for transmission over interplanetary distances) | Probability that the data contains a biosignature and a summarized/compressed version of the data (for transmission over interplanetary distances) | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-812 | EMIT Methane Plume Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine learning based detection of methane plumes from imaging spectroscopy data. | Machine learning based detection of methane plumes from imaging spectroscopy data. | methane plumes | methane plumes | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-813 | Science Autonomy for NEAScout | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Autonomy software to support on board AI science capabilities for the Near Earth Asteroid Scout Mission. | Autonomy software to support on board AI science capabilities for the Near Earth Asteroid Scout Mission. | Autonomy software to support on board AI science capabilities for the Near Earth Asteroid Scout Mission. | Autonomy software to support on board AI science capabilities for the Near Earth Asteroid Scout Mission. | |||||||||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-814 | Time Series Forecasting, Evaluation and Deployment (Time-FED) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Time-FED is a machine learning system for Time series Forecasting, Evaluation and Deployment. TimeFED was created in response to the following data realities: 1) data contains significant gaps (sometimes on the order of months or years) due to sensor outages, 2) data are not sampled at uniform rates, 3) time series data can be in stream or track form. JPL has built an infrastructure for time series prediction and forecasting that respects these realities. | Time-FED is a machine learning system for Time series Forecasting, Evaluation and Deployment. TimeFED was created in response to the following data realities: 1) data contains significant gaps (sometimes on the order of months or years) due to sensor outages, 2) data are not sampled at uniform rates, 3) time series data can be in stream or track form. JPL has built an infrastructure for time series prediction and forecasting that respects these realities. | Time-FED outputs both predictions and forecasts. Because Time-FED has been applied to many problems related to extreme events and transient science, Time-FED also finds novel or anomalous events. | 01/09/2021 | b) Developed in-house | No | Time-FED outputs both predictions and forecasts. Because Time-FED has been applied to many problems related to extreme events and transient science, Time-FED also finds novel or anomalous events. | time series data | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | JPL: Jet Propulsion Laboratory | NASA-815 | AI Risk-aware task and motion planning for snake-like robots in icy environments | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enables task and navigation planning in extreme ice terrains/environments. | Time-FED provides time-savings for persons wishing to prepare data for and use ML models. For example, Time-FED has been used on ML applications applied to GNSS data stored in ESDR archives. | Activity Plan/Schedule with velocity commands for motion tasks | 01/09/2022 | b) Developed in-house | No | Activity Plan/Schedule with velocity commands for motion tasks | Time-FED is compatible with many datasets including both univariate and multivariate time series datasets. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-824 | The AQcGAN Air Quality Emulator for GEOS | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AQcGAN is an air quality emulator for surface O3 and NOx concentrations. | AQcGAN is an air quality emulator for surface O3 and NOx concentrations. | The emulator utilizes a convolutional generative adversarial network (cGAN) to predict sequential time steps of selected air quality tracers. | The emulator utilizes a convolutional generative adversarial network (cGAN) to predict sequential time steps of selected air quality tracers. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-825 | The AIMFAHR Project | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Artificial Intelligence (AI) techniques, particularly Machine Learning (ML), have undergone significant growth in heliophysics research in recent years. Various ML models have emerged, some outperforming empirical and physics-based models while significantly reducing computational time. The Artificial Intelligence Modeling Framework for Advancing Heliophysics Research (AIMFAHR) project is an initiative aimed at integrating community-wide AI efforts into a unified AI modeling framework, advancing system-of-systems science in Sun-Earth interactions and enhancing the predictability of space weather hazards. We selected data-driven models of the magnetosheath, cusps, auroral precipitation, field-aligned currents (FACs), ionospheric electrodynamics, and thermospheric density as the initial set of AIMFAHR base models. We simulated geomagnetic storms on 4 Jan 2023, 6 May 2023, and 11 May 2024, selected by the Machine Learning-based Geospace Environment Modeling (MLGEM) resource group at the Geospace Environment Modeling (GEM) workshop. | These initial efforts provide valuable insights for future AIMFAHR activities, including ML model coupling, knowledge transfer between models, uncertainty quantification, and research-to-operation transitions. 
| The AIMFAHR models reveal the storm responses of various geospace systems from a data-driven perspective, including the spatiotemporal variation of the magnetopause reconnection line and its global dayside reconnection rate; cusp motions and the evolution of cusp ion energy dispersions; auroral boundary motions and variation in global auroral spectra; increases in FACs and ionospheric potentials; and enhanced Joule heating in the upper atmosphere. | The AIMFAHR models reveal the storm responses of various geospace systems from a data-driven perspective, including the spatiotemporal variation of the magnetopause reconnection line and its global dayside reconnection rate; cusp motions and the evolution of cusp ion energy dispersions; auroral boundary motions and variation in global auroral spectra; increases in FACs and ionospheric potentials; and enhanced Joule heating in the upper atmosphere. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-827 | Qualitative Evaluation of Foundation Models (QEFM) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Objective: Quantify the performance of Foundation Models (FMs) for weather and climate to guide GSFC scientists in effectively integrating AI into their research. | We now have a functioning framework for rapidly evaluating current and future Foundation Models available to our SMEs, which will be available to GSFC scientists over the summer. | This work sets benchmarks for comparisons of FM performance out-of-the-box. Future work will assess performance for downstream tasks (fine-tuning). | This work sets benchmarks for comparisons of FM performance out-of-the-box. Future work will assess performance for downstream tasks (fine-tuning). | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-829 | System for Uncertainty, Risk, and Feature Assessment of Surfaces (SURFAS) Holistic Scene Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | SURFAS, given the above combination of inputs for a particular vehicle, is informed by the expertise of mission designers to fuse relevant data together in order to create a holistic map of the scene, combining all information that the agent can perceive about that scene; e.g., the hyperspectral imager may autonomously identify an area of geological interest to the mission. Using data from a LiDAR scan, the robot can plan a path to reach the area. That path can be modified to avoid a patch of soil that the ground penetrating radar identifies as soft and loose (and so should be avoided by the robot). SURFAS intends to provide an architecture for uniting these telemetry streams into a single scene map that is made available to onboard planners to account for both risk AND reward in planning. | SURFAS intends to provide an architecture for unifying onboard systems that are often siloed with limited data sharing. Particularly, science instruments can provide relevant traversal information, especially in novel environments encountered on planetary surfaces. We want to avoid leaving data on the table and instead use all available telemetry to inform the motion and mission planning for next-generation robotic explorers. | Map “layers” of the scene by individual features (e.g. a layer containing only LiDAR and one containing only GPR), as well as a fused map, where each individual layer measurement is translated to a measure of mission risk versus reward, and combined with measurements from other layers.
The final data product is analogous to a costmap used in terrestrial robotic planning, but encompassing all available aspects of the scene. | Map “layers” of the scene by individual features (e.g. a layer containing only LiDAR and one containing only GPR), as well as a fused map, where each individual layer measurement is translated to a measure of mission risk versus reward, and combined with measurements from other layers. The final data product is analogous to a costmap used in terrestrial robotic planning, but encompassing all available aspects of the scene. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-830 | Explainable Machine Learning Methods for Ocean Worlds Mass Spectrometry Data: Biosignatures and Environmental Characterization | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Future astrobiological and geochemical investigations of ocean worlds (OWs) such as Europa and Enceladus will face challenges that can be addressed through science autonomy. While ML methods are potentially powerful tools for the prediction of geochemistry and biosignatures using IRMS data, many of these models are “black boxes” that reduce trust in predictions, and interpretable ML tools are needed to instill trust. In addition, methods are needed to diagnose false predictions for high-stakes predictions like extraterrestrial life. | We develop and validate an interpretable ML local variable importance tool called Local Nearest-neighbors Projected Distance Regression (local-NPDR) that improves the explainability of ML models for biosignatures and OW chemistry using real IRMS measurements of volatile CO2 from OW analogue brines and simulated data. We hypothesize that false predictions may be identified when the signs and magnitudes of local and global variable importance scores differ. We add local-NPDR false prediction diagnostics to our interpretable ML algorithms that include global-NPDR feature selection and network visualization of globally-important variables. Together these NPDR-based tools add interpretability to ML models with the ability to detect biosignatures and characterize the environment with respect to pH, CO2 concentration and salt content. Such interpretable ML methods will be important for the implementation of science autonomy for future missions to study plumes of OWs and evaluate their habitability and geochemistry. 
| One important example is the use of machine learning (ML) models for the prediction of molecular biosignatures (signals of life) from isotope ratio mass spectrometry (IRMS) data on OW orbiters. | One important example is the use of machine learning (ML) models for the prediction of molecular biosignatures (signals of life) from isotope ratio mass spectrometry (IRMS) data on OW orbiters. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-831 | Transiting Exoplanet Survey Satellite (TESS) Neural Network (NN) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | The Transiting Exoplanet Survey Satellite (TESS) is a NASA mission focused on exploring and finding exoplanets around nearby stars using the transit method. The TESS telescope covers a large field of view of 96 sq. deg in a single exposure. It has four cameras arranged vertically pointing from the ecliptic plane toward the poles. | Thanks to this configuration and observing schedule, TESS is able to observe asteroids with a high duty cycle. Current techniques to search for asteroid signals on images rely on the shift-and-stack method, which involves testing all possible combinations of direction and speed an object can move across the image to maximize the detection signal and find the asteroid’s track. This method is computationally expensive, and only attainable when the parameter space (direction-velocity) is constrained, usually to the main direction (e.g. orbits parallel to the ecliptic plane) and low speeds (main belt asteroids). This introduces a bias against fast-moving asteroids and high-inclination orbits (vertical tracks). To solve this, we implemented a rotationally invariant neural network (NN) model that performs semantic segmentation to find moving objects in TESS FFIs. We constructed a custom training set using 64x64x64 cubes of pixel flux time series and truth masks with the tracks of known asteroids from the JPL Horizons ephemeris system. Our NN model can find known and new asteroids with all kinds of track orientations, showing no bias against objects moving at high inclination orbits, or fast-moving asteroids, or tracks with a change in direction.
This NN model detects ~90% of known asteroids down to apparent visual magnitude 20th and has a detection limiting magnitude of ~20.5. This is on par with current implementations of the shift-and-stack method but without the bias introduced by limiting the range of track direction and velocity. | This NN has an architecture that uses two 3D U-Nets stacked (W-Net) with skip connections that output a 3D segmentation mask with asteroid detections. We will introduce the NN model and present results from predictions using years 1 and 2 of TESS data. Additionally, we will show preliminary light curves extracted from new asteroids detected by our model. | This NN has an architecture that uses two 3D U-Nets stacked (W-Net) with skip connections that output a 3D segmentation mask with asteroid detections. We will introduce the NN model and present results from predictions using years 1 and 2 of TESS data. Additionally, we will show preliminary light curves extracted from new asteroids detected by our model. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-832 | RAG Chatbots: Enhancing User and Internal Support Through Dual Knowledge Systems | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This work presents two complementary RAG-based chatbot systems developed for NASA's Community Coordinated Modeling Center. These tools represent practical applications of retrieval-augmented generation to enhance both external user support and internal team productivity. | By implementing efficient document processing pipelines and strategic context assembly, both systems demonstrate significant response time improvements while maintaining information boundaries between public and private knowledge domains. | The public-facing chatbot provides quick answers about CCMC tools, services, and general heliophysics information, optimized for speed and accuracy when addressing common user queries. | The public-facing chatbot provides quick answers about CCMC tools, services, and general heliophysics information, optimized for speed and accuracy when addressing common user queries. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-833 | Machine Learning Techniques for Fast Radiative Transfer | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Radiative transfer models for satellite data assimilation and physical atmospheric retrievals need to be both fast and accurate to fulfill operational constraints. These contradictory requirements have led to the development of several algorithms specifically for this purpose. Machine Learning offers a new set of tools that can be used for this application and this presentation will discuss implementation and results of deep learning in this context. | Radiative transfer models for satellite data assimilation and physical atmospheric retrievals need to be both fast and accurate to fulfill operational constraints. These contradictory requirements have led to the development of several algorithms specifically for this purpose. Machine Learning offers a new set of tools that can be used for this application and this presentation will discuss implementation and results of deep learning in this context. | Radiative transfer models for satellite data assimilation and physical atmospheric retrievals need to be both fast and accurate to fulfill operational constraints. These contradictory requirements have led to the development of several algorithms specifically for this purpose. Machine Learning offers a new set of tools that can be used for this application and this presentation will discuss implementation and results of deep learning in this context. | Radiative transfer models for satellite data assimilation and physical atmospheric retrievals need to be both fast and accurate to fulfill operational constraints. These contradictory requirements have led to the development of several algorithms specifically for this purpose.
Machine Learning offers a new set of tools that can be used for this application and this presentation will discuss implementation and results of deep learning in this context. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-836 | Using ChatGSFC to Streamline Analysis and Reporting | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | A series of 90+ examples of how ChatGSFC or a NASA-focused LLM can be utilized to enhance project planning and controls (PP&C) analysis and streamline project management activities. | Small working groups in various functional areas (EVM, Risk, Resource Management, Schedule) have been experimenting with ChatGSFC and documenting time-savings and efficiencies gained from using the AI/LLM tool. | Cost, schedule, risk analyses, resource estimation, resource requirements, subcontracts analysis, ability to generate a multitude of various project plans and sub-plans, WBS/WBS dictionary | Cost, schedule, risk analyses, resource estimation, resource requirements, subcontracts analysis, ability to generate a multitude of various project plans and sub-plans, WBS/WBS dictionary | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-838 | Ground-based Detection of Martian Dust Devils With a Fine-tuned Fast R-CNN | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | We developed a two-stage pipeline for efficient dust devil detection in Mars rover imagery. Our approach combines preprocessing filters to remove unsuitable images, followed by a Faster R-CNN with ResNet-50 backbone and Feature Pyramid Network, effectively detecting nonrigid, low-opacity dust devils that traditional methods frequently miss. We demonstrated clear advantages over generic object detection models. | Our fine-tuned model significantly outperforms general-purpose architectures, highlighting the critical importance of domain-specific training for specialized atmospheric phenomena detection across multiple rover platforms and mission phases. This work has established a foundation for onboard implementation in future Mars missions. Our model could enable intelligent data prioritization, allowing rovers to retain high-resolution imagery of dust devil activity while applying aggressive compression to less scientifically valuable frames, optimizing limited bandwidth resources. | Detecting nonrigid, low-opacity dust devils | Detecting nonrigid, low-opacity dust devils | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-839 | A Detection and Reporting System for Spacecraft Threats | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The goal is to identify, and potentially predict, when an event (e.g., anomaly, interference) has occurred or will occur. Spacecraft anomalies and interference events can take spacecraft out of mission. | Shorten anomaly/event response times, potentially automating responses; faster identification of anomalies, and even prediction of events. Keeps spacecraft on mission longer. | List of identified events (type, time, location, etc.) provided to engineering/operations. We are very conscious to drive false positives and operator workload to the absolute minimum. | List of identified events (type, time, location, etc.) provided to engineering/operations. We are very conscious to drive false positives and operator workload to the absolute minimum. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-841 | Neural Posterior Estimation for X-ray Reflection Spectroscopy: Training on Complex Physical Models and AGN Observation-Driven Parameter Grids | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The development of an automated inference tool tailored to extract key physical parameters from obscured AGN (active galactic nuclei) X-ray spectra by means of more complex physical models than ever before with machine learning. For our pilot work, we use the decoupled X-ray reflection MYTorus model with separate direct and scattered ("reflected") continua, as well as narrow Fe K fluorescence. | Such a complex model poses a significant computational challenge for traditional inference techniques. To address this, we construct a physically informed, observation-driven training grid, based on the parameter space spanned by nearby AGN observed with NuSTAR. We use this grid to train a Neural Posterior Estimation (NPE) model within a simulation-based inference (SBI) context. The parameters inferred are the photon index (Γ), the global and line-of-sight equivalent neutral hydrogen column densities (N_Hs and N_Hz), and the reflection scaling factor (A_S), each with associated uncertainties. | This approach demonstrates a path to likelihood-free posterior estimation using neural networks, providing a scalable alternative to traditional methods for parameter inference in complex astrophysical models. | This approach demonstrates a path to likelihood-free posterior estimation using neural networks, providing a scalable alternative to traditional methods for parameter inference in complex astrophysical models. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-842 | Revolutionizing Neutron Star Parameter Inference through Machine Learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Observations of neutron stars provide estimates of their mass and radius—key parameters for constraining their still uncertain equation of state. However, the accuracy of parameter inference is limited by the complexity and computational cost of current models, with more accurate models becoming prohibitively expensive. To make parameter inference feasible, we have developed a transposed convolutional neural network (NN) that serves as a surrogate for the physics-based model within our MCMC algorithm. We test and validate this approach in a simple static vacuum regime using millisecond pulsar PSR J0030+0451 as a case study. The NN achieves a speed-up of over 400× which enables the algorithm to converge on a solution for the first time. We outline our progress towards incorporating more realistic and complex regimes, where the neural network becomes an increasingly vital component of the inference process. | The developed neural network surrogate speeds up the computation of model pulsar X-ray and gamma-ray light curves within Markov chain Monte Carlo and multi-nested suites by a factor of ~400 for vacuum magnetic fields and ~1,000,000 for realistic force-free magnetic fields. This allows the derivation of posterior parameter distributions that otherwise would be impossible. | Posterior distributions | Posterior distributions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-843 | Curiosities of a Systems Engineer | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI has enabled me, as a systems engineer, to dive deep into subjects outside my expertise, helping to bridge the gap between specialists and generalists. This includes helping me identify what appears to be an optimal 5-node GNSS-like constellation around the Moon focused on the Lunar South Pole, which happens to be the optimal 6-node constellation with one node removed. Another project has been an auditory "game" demonstrating how the brain builds correlations between different signals played in separate ears; this demonstration uses an auditory rendition of the GPS gold codes and builds intuition on phase and Doppler offsets. The last, ongoing, side project has been using AI to create a software GNSS receiver leveraging the intuitions gained from the auditory demo. I have learned a few prompting tricks along the way that I would like to share. | TEMPO spectra seem to contain enough information to predict GPP as accurately as the combination of MODIS and MERRA-2 data, despite the absence of infrared information. | GPP (Gross Primary Productivity) across entire TEMPO footprint for all daylight hours | GPP (Gross Primary Productivity) across entire TEMPO footprint for all daylight hours | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-844 | Continuing Adventures in the Discovery of Multiple Star Systems with AI/ML | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Early data-driven analyses of ozone chemistry sensitivity primarily relied on "ratio-based" indicators to partially linearize the non-linear aspects of urban ozone chemistry, which are influenced by pollution levels, light, and water vapor. With the development of more sophisticated algorithms, including machine learning techniques capable of fitting high-dimensional non-linear functions, we have shown that a highly effective parameterization of net ozone production rates (PO3) can be achieved. This approach eliminates the need for empirical linearization of ozone chemistry through various indicators and allows for the primary inputs to be accurately constrained using satellite observations. | We need a better understanding of the worldwide spatiotemporal variability of ozone production rates. This is mainly due to the limited information we can gain from supersites or aircraft data by which we can generate observationally constrained PO3. However, we offer a significantly enhanced algorithm to parameterize PO3 using retrospective aircraft observations and a handful of variables that can be primarily informed by satellite observations that provide high spatial coverage. Our work shows long-term maps of PO3 worldwide. This work has important implications for pollution exposure and regulations and can promote the use of satellite observations as an essential component for informing emission regulations beyond the current capabilities. | All outputs related to the project are well documented on our website: https://www.ozonerates.space/ | All outputs related to the project are well documented on our website: https://www.ozonerates.space/ | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-845 | Autonomous Science and Technology for Responsive Adaptation (ASTRA) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The ASTRA team is working to develop and mature capabilities for extensibility and science autonomy. Extensibility would give us the ability to add and collaborate among multi-organizational assets to form a new, distributed and disaggregated mission. Our onboard Science AI would allow missions to make decisions for future action onboard using real-time data, enabling adaptable and responsive exploration and discovery. | Intelligent and interoperable extensible architectures for space developed and deployed under ASTRA are poised to become a “get with it or get out” technology, similar to the protocols for the internet, Bluetooth, etc. We utilize two key capabilities developed by the ASTRA team in FY25: 1) Intelligent Extensible Mission Architectures (IEMA) and 2) Objective-Based Artificial Intelligence (OBAI). IEMA code includes the ability to broadcast capabilities and requests, and to join in a coordinated effort with assets that were previously unknown. OBAI code includes the ability to use established observational priorities to analyze data in real-time, make recommendations and decisions for future action based on the perceived importance of observation(s), and carry out actions in a new, coordinated multi-asset effort. | classification, prediction, confidence in prediction, anomalies | classification, prediction, confidence in prediction, anomalies | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-846 | Multimodal Earth Observation Workflow for Machine Learning (MEOW-ML): A Case Study in Canopy Height Model and Canopy Height Change Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Here we introduce an updated version of Multimodal Earth Observation Workflow for Machine Learning (MEOW-ML), an end-to-end data fusion and artificial intelligence (AI) and machine learning (ML) framework tailored for Earth Observation (EO). MEOW-ML supports the full AI/ML lifecycle, from data preparation to model training and evaluation. | It speeds up the iterative loops of data processing and model architecture development, and, critically, enables the integration of heterogeneous data sources. A primary intent for MEOW-ML is its application in the design of New Observing Strategies (NOS) which require very large data sets with data of diverse types acquired by a variety of sensors potentially aboard multiple sub-orbital and orbital platforms. MEOW-ML can be used to test various combinations of data types, qualities and resolutions to optimize the design of an NOS. | As an example, we present a trained ML model with promising performance using this framework, for predicting forest productivity and degradation over time by using canopy height change (CHC) as a proxy. We conducted modeling at two spatial resolutions to illustrate the potential use of this framework for NOS design activities. | As an example, we present a trained ML model with promising performance using this framework, for predicting forest productivity and degradation over time by using canopy height change (CHC) as a proxy. We conducted modeling at two spatial resolutions to illustrate the potential use of this framework for NOS design activities. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-847 | Sub-Saharan West Africa Land Cover Change | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | In recent decades, Sub-Saharan West Africa has seen rapid and ongoing land cover change fueled by population growth and subsequent agricultural expansion and intensification. These changes have led to negative impacts including a decrease in land productivity, loss of local biodiversity, and a general degradation of ecosystem services, resulting in debate over whether policies to discourage this type of transformation should be introduced. However, moderate resolution satellite data and traditional remote sensing methods are insufficient at resolving land cover land use change in this region, which is dominated by small, dispersed patches of savanna-woodlands and smallholder agriculture systems (< 3 ha.) that consist of highly dynamic and often ill-defined field boundaries. In Senegal, extreme latitudinal gradients in phenology, limited availability of cloud-free wet season imagery, and widespread burnt area during the dry season add further complexity in identifying sub-hectare land cover and change. | Thanks to the growing availability of very high resolution (VHR) imagery (< 3 m GSD) through commercial vendors and the increased accessibility of high-performance computing resources such as GPUs, we are now able to perform computationally-expensive, deep learning-based predictions on thousands of VHR observations for mapping fine-scale land cover over large areas. We have leveraged these enhanced capabilities by developing a series of deep learning models for land cover classification with WorldView 8-band imagery (2 m GSD), and have performed inference on all data available over the study domain. To assess the cost-benefit of this effort, we implemented a simple spatial resolution experiment at select locations in Senegal by pansharpening and resampling 2 m WorldView multispectral imagery and training data to alternate spatial resolutions (0.5 m, 5 m, 10 m, and 30 m) for training and inference using our deep learning models. | Presented here, our quantitative results evaluating the impact of spatial resolution on the accuracy of mapping agricultural expansion and tree/shrub cover in Senegal provide insight into the optimal input parameters for mapping land cover with deep learning applications. | Presented here, our quantitative results evaluating the impact of spatial resolution on the accuracy of mapping agricultural expansion and tree/shrub cover in Senegal provide insight into the optimal input parameters for mapping land cover with deep learning applications. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-848 | Near-real-time aerosol retrievals from OMPS Limb Profiler measurements | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Rapid aerosol retrievals from OMPS Limb Profiler are important to monitor large wildfires and volcanic eruptions that reach the stratosphere. These near-real-time (NRT) retrievals can inform aviation safety and help coordinate ground-based and in situ measurements of these disruptive events. To enable NRT retrievals from OMPS Limb Profiler, we utilize neural networks trained on the operational physics-based data product, reducing the runtime to retrieve these aerosols from ~2 hours to ~2 minutes. | Sped up aerosol retrievals by ~60 times to enable rapid retrievals within 3 hours from acquisition, integrated imagery into NASA Worldview for quick assessment of disruptive events (volcanic eruptions, major wildfires) alongside related data products (e.g., OMPS SO2) | Predictions of aerosol extinction profiles between 0.5 - 40.5 km at 510, 600, 675, 745, 869, and 997 nm. | b) Developed in-house | Yes | Predictions of aerosol extinction profiles between 0.5 - 40.5 km at 510, 600, 675, 745, 869, and 997 nm. | 1. OMPS LP Level-1 gridded radiance data product at selected wavelengths and altitudes 2. OMPS LP Level-2 Aerosol data product 3. Atmospheric temperature and pressure profiles from Global Earth Observing System Forward Processing for Instrument Teams data product | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-849 | Retrieving stratospheric water vapor from OMPS Limb Profiler measurements | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Stratospheric water vapor (SWV) plays an important role in atmospheric chemistry, dynamics, and radiative forcing. OMPS Limb Profiler (LP) provides daily near-global coverage and currently operates on two satellites, with plans to launch two additional platforms in the coming years. Developing a method to retrieve SWV from OMPS LP measurements would provide measurements of SWV into the 2030s. However, OMPS LP has low sensitivity to SWV, which inhibits the application of traditional retrieval methods. We explored using neural networks to perform these retrievals. | With the impending loss of Aura Microwave Limb Sounder (MLS), which has provided SWV measurements for two decades, our OMPS LP product will serve as a low-cost continuation of the MLS water vapor record until successor instruments can be developed and launched. | Each CNN predicts a water vapor profile. The mean of the predictions comprises the reported OMPS LP water vapor profile, and the standard deviation of the predictions comprises the uncertainty in that water vapor profile. | b) Developed in-house | Yes | Each CNN predicts a water vapor profile. The mean of the predictions comprises the reported OMPS LP water vapor profile, and the standard deviation of the predictions comprises the uncertainty in that water vapor profile. | 1. Aura MLS water vapor data product 2. OMPS LP Level-1 gridded radiance data product at select wavelengths 3. Atmospheric temperature and pressure profiles from the NASA Global Earth Observing System Forward Processing for Instrument Teams product | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-850 | Retrieving stratospheric NO2 profiles from OMPS Limb Profiler measurements | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Stratospheric NO2 plays an important role in ozone photochemistry. The OMPS Limb Profiler (LP) instrument provides daily near-global coverage; it is currently operating on two satellites and is planned for two additional platforms in the coming years. However, LP is only weakly sensitive to NO2 due to coarse spectral resolution at wavelengths where NO2 absorbs, which prohibits the use of traditional retrieval techniques. | Additional observations of stratospheric NO2 abundances can improve our understanding of stratospheric chemistry and dynamics. Validation of this work is complicated due to limited availability of observations, but the retrieved profiles show good agreement with state-of-the-art model simulations and the retrieved integrated column shows good agreement with space-borne measurements from Aura OMI. | Prediction of NO2 profiles between 10.5 - 45.5 km | b) Developed in-house | Yes | Prediction of NO2 profiles between 10.5 - 45.5 km | 1. OMPS LP Level-1 gridded radiance data product at selected wavelengths 2. Atmospheric temperature and pressure profiles from the NASA Global Earth Observing System (GEOS) Forward Processing for Instrument Teams product 3. NO2 profiles from the GEOS Climate Chemistry Model matched to local solar time of OMPS LP observations | No | Yes | ||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-851 | Automated identification of volcanic plumes | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Use ML to reduce noise in images of satellite SO2 retrievals, and automatically identify volcanic SO2 plumes in the images. | Automated identification and tracking of volcanic plumes will be useful in monitoring and mitigating volcanic hazards. | segmentation | segmentation | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-853 | Correcting NASA NPOL weather radar beam blockage using machine learning approaches | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Our objective is to fill in blocked radar data using Convolutional Neural Networks (CNNs). This project is in the early stages of preparing the data and model for training. | This project will help fill in NASA NPOL blocked radar data from the Global Precipitation Measurement (GPM) mission Ground Validation (GV) field campaigns to better validate rainfall from space. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-854 | Application of ML to Detection of Anomalies in Spacecraft Health and Status Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are collaborating with the Magnetospheric Multiscale (MMS) mission to research Machine Learning (ML) techniques capable of predicting and detecting anomalies in spacecraft health and status data. We are combining historical MMS telemetry data, with known mission events and anomalies, to perform unsupervised ML techniques. We have primarily used Temporal Convolutional Networks (TCN) as they preserve the temporal nature of our data while detecting long term and short term trends in the data. | This research provides new insights in telemetry data patterns and may reduce the time it takes to identify anomalies, allowing operators to focus on finding resolutions to ensure spacecraft health and safety. | Detection of anomalies | 10/01/2025 | b) Developed in-house | No | Detection of anomalies | MMS and LRO telemetry data from GSFC's Telemetry as a Service (TaaS) tool | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-855 | AI: Logic Design and Verification | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Reinforcement Learning | There has been much discussion on the use of Artificial Intelligence (AI) in many fields. This study is a first look at the use of AI in the field of digital electronics, primarily generating VHDL designs as well as self-checking test benches. A series of experiments were conducted, giving the AI design and verification tasks for standard logic constructions as well as modest-sized applications. The AI’s performance was analyzed in detail. | The key benefit of using AI for these applications is the huge speed advantage for generating designs and test benches, when compared to a human. This is for NASA and organizations that do similar engineering; not for the general public. In the course of this study, some cases that would take a human a number of hours to complete would be done in less than a minute by the AI. | The error or anomaly rate was relatively high for the AI; indeed, it would be higher than that of a young engineer. Some of the anomalies were minor and could be corrected by the human or by directing the AI. Other anomalies were moderate to major; a considerable amount of human time would be necessary to fix the problems, and in some cases the AI output would have to be replaced by the human. | The error or anomaly rate was relatively high for the AI; indeed, it would be higher than that of a young engineer. Some of the anomalies were minor and could be corrected by the human or by directing the AI. Other anomalies were moderate to major; a considerable amount of human time would be necessary to fix the problems, and in some cases the AI output would have to be replaced by the human. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-856 | SOLARIS-AI: Analyze Planetary Modulation in Solar Activity Cycles (Prediction of solar storms possible?) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI-driven project analyzes multi-decadal solar activity datasets to identify and quantify periodic signals that correlate with planetary orbital mechanics. The project processes large volumes of solar observation data (sunspot numbers, solar area measurements, magnetic field data) to detect cyclic patterns and their relationship to Jupiter's orbital period, planetary conjunctions, and synodic cycles. The project validates the hypothesis that solar activity (including magnetic activity and solar storms) is modulated by gravitational influences from the planetary system, particularly Jupiter's 11.86-year orbit. This project leverages AI to transform decades of solar observations into actionable insights for space exploration safety and scientific understanding of solar-planetary interactions. | Scientific Validation: Quantitative verification of planetary influence on solar activity hypothesis Enhanced understanding of solar-planetary field interactions Improved solar cycle prediction accuracy through planetary position integration Prediction of solar storms and their impact on Earth and on astronauts traveling to the Moon and Mars? NASA/Public Benefits: Cost Savings: $2-5M annually in satellite protection through better space weather prediction Time Savings: 90% reduction in manual solar cycle analysis (from months to days) Mission Safety: Enhanced crew and equipment protection during solar maximum periods Communication Systems: Improved reliability of satellite communications and GPS ROI: 300-500% return through prevented satellite damage and improved mission planning KPIs: 95% accuracy in 11-year cycle detection, 85% accuracy in solar maximum timing prediction | Automated Analysis Results: Dominant periodicity detection with confidence intervals (11.1 ± 0.3 years) Planetary correlation coefficients and phase relationships Solar cycle phase predictions 2-5 years in advance Anomaly detection for unusual solar activity patterns Real-time updates of planetary modulation strength Automated reports flagging significant solar-planetary alignment events Risk assessments for space weather during planetary conjunctions | Automated Analysis Results: Dominant periodicity detection with confidence intervals (11.1 ± 0.3 years) Planetary correlation coefficients and phase relationships Solar cycle phase predictions 2-5 years in advance Anomaly detection for unusual solar activity patterns Real-time updates of planetary modulation strength Automated reports flagging significant solar-planetary alignment events Risk assessments for space weather during planetary conjunctions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-857 | Specifying Properties of Dayside Magnetopause Reconnection from a Machine-Learning Model for the Earth's Cusps | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Implementation of models of the magnetospheric cusp using in-situ ion flux data from ESA's Cluster mission and DMSP spacecraft. The models produce 3-D ion flux distributions in response to solar wind parameters such as density, velocity and magnetic field. | Can provide information regarding dayside magnetic reconnection, which is the main process that describes the energy transfer between the Sun and Earth. | Predictions of the structure of the cusp in response to solar wind parameters | Predictions of the structure of the cusp in response to solar wind parameters | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-858 | Strategic Text Augmentation & Research Synthesis for Physics Innovation Narratives & Evaluations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project is an AI-assisted daily workflow solution designed to streamline the scientific documentation process for physics researchers. The system leverages ChatGSFC's advanced language processing capabilities to transform Vilem Mikula's research notes, experimental data, and scientific observations into polished monthly reports, publication drafts, and research proposals. The platform specializes in physics domain knowledge. The project automates the time-consuming documentation aspects of scientific work, allowing researchers to focus on discovery and analysis rather than report formatting and literature synthesis. | Efficiency Improvements: Time Savings: 70% reduction in report preparation time Documentation Quality: 85% increase in consistency and completeness of research documentation Publication Output: 30% increase in publication submission rate through streamlined drafting Grant Success: 25% improvement in proposal acceptance through AI-enhanced presentation Knowledge Transfer: Enhanced collaboration through better-structured communication ROI Metrics: Estimated 200+ hours saved annually per researcher on documentation tasks $30,000-$50,000 value creation per researcher annually through increased research productivity 40% reduction in administrative support needs for report generation KPI: 90% researcher satisfaction with AI-generated content quality | Generated Content: Monthly progress reports with proper scientific formatting and visualizations Publication drafts with appropriate journal-specific structure Research proposals with compelling narratives and clear methodology descriptions Data analysis summaries with statistical interpretations Conference abstracts and presentation outlines Literature review syntheses for specific research questions Methodology documentation with enhanced clarity and reproducibility Infrastructure Requirements: Access Point: Standard workstation with secure ChatGSFC interface Storage: Encrypted local repository (5-10GB) for maintaining context between sessions Integration: API connections to reference management software and institutional databases Security: ITAR/EAR compliant information handling protocols Collaboration: Multi-user interface for team contributions to shared documents Version Control: Document history tracking and comparison features Local Processing: Occasional offline capability for field research documentation | Generated Content: Monthly progress reports with proper scientific formatting and visualizations Publication drafts with appropriate journal-specific structure Research proposals with compelling narratives and clear methodology descriptions Data analysis summaries with statistical interpretations Conference abstracts and presentation outlines Literature review syntheses for specific research questions Methodology documentation with enhanced clarity and reproducibility Infrastructure Requirements: Access Point: Standard workstation with secure ChatGSFC interface Storage: Encrypted local repository (5-10GB) for maintaining context between sessions Integration: API connections to reference management software and institutional databases Security: ITAR/EAR compliant information handling protocols Collaboration: Multi-user interface for team contributions to shared documents Version Control: Document history tracking and comparison features Local Processing: Occasional offline capability for field research documentation | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-860 | AI Based VoIP Outage Prevention | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project collects live data from the Voice over Internet Protocol (VoIP) system that currently resides on the NASA network, then applies an appropriate machine learning technique to train an AI model that can predict and prevent network outages. | Saving time, labor, and money on fixing outage issues. | Predictions, and possibly recommendations. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-861 | Quantification of Uncertainty Analysis Toolkit (QUAnT) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Quantification of Uncertainty Analysis Toolkit (QUAnT) is a digital-twin framework that informs and guides the design process of complex, large-scale, multidisciplinary systems throughout their life cycle, while maximizing resources (e.g., size, weight, power, cost, schedule). QUAnT enables: 1) orders-of-magnitude reductions in computational cost through a multi-fidelity simulation approach and the most efficient sampling techniques, 2) maximization of project resources through optimal allocation and task automation, 3) model predictive capabilities through data-driven learning (digital twins), 4) quantification of uncertainty to the maximum extent possible to efficiently identify risk drivers, 5) reliability analyses for rare events through advanced statistical methods. QUAnT lays its foundations on state-of-the-art methodologies described in peer-reviewed literature and leverages artificial intelligence and machine learning to automate and facilitate several tasks. It has successfully been applied to several engineering problems including flown NASA missions such as the James Webb Space Telescope and the ongoing Mars Sample Return, where it demonstrably brought notable savings in terms of time, cost, technical quality and efficiency. | QUAnT is expected to have a strategic, long-term, high payoff especially when used early in a project life cycle. This is advantageous as it can increase system knowledge and inform decisions when the design freedom is higher and cheaper (i.e., well before PDR, when 85% of the project’s total life cycle cost is locked in). 
The anticipated ROI is tied to QUAnT’s ability to guide the mission development process while cutting down on computational cost, thus bringing notable savings in terms of time, cost and efficiency. Namely, QUAnT yields better (10%-50%) margin estimates, leading to time (<1+ year) and cost (<$200M+) savings; efficiencies in analysis cycles yield time and cost savings also thanks to the elimination of obsolete tasks and the workforce needed to perform them (as an example from a real-life case: 2.5 months, $300K for a 5-person team vs. 2 weeks, $5K for 1 person applying this technology). Finally, QUAnT is mathematically proven to provide the highest-quality results, which ensures having the best information available at hand when making decisions under uncertainty. Note, the ROI estimates provided here were derived from specific cases and can vary, but they do represent the correct order of magnitude. | Predictions, decisions | 06/01/2025 | b) Developed in-house | Yes | Predictions, decisions | Mission-specific data (thermal, structural, optical, etc.) | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-862 | Machine Learning effort to calculate Parker Solar Probe magnetometer offsets | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Testing various ML algorithms to model the magnetometer offset values at points in the orbit where the traditional methods are not available. | The previous methods of offset calculations for the magnetometer are not applicable as the probe passes through its closest approach to the sun each orbit. This is also the primary science period of the mission. We are trying to find an alternative method based on housekeeping information from the spacecraft itself, since that is the source of the offsets affecting the magnetometer: the spacecraft contribution to the magnetic field readings. | Our ML model tries to relate these offset values (calculated using factors outside the spacecraft) to the status of the spacecraft systems and narrow in on which systems are contributing to the offset values. We were able to create a model that averaged within 4 nT of the traditional method values. Unfortunately, the sparseness of the Alfvenic wave detections limited the accuracy of the model, as they are only detected at most once per day. Given this limitation, we are pleased we were able to get within 4 nT of the traditional method. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-863 | Identifying global dust and smoke over ocean using MODIS sensor | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Using ML RF/CNN to detect dust in MODIS images. | Improving dust detection over traditional physics-based dust/smoke detection to provide prior knowledge for downstream retrievals; the results can also be used for aerosol-related research and public health studies. | Pixel-level dust and smoke identifications and probabilities | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-865 | A Forecasting Scheme For Accelerated Harmful Algal Bloom Monitoring (FASTHAB) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To develop an AI/ML water quality forecast model initially targeted for the Chesapeake Bay. | It will enhance the STREAM water quality nowcast system with forecast capabilities. | Chlorophyll-A, temperature, and salinity | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-866 | Instantaneous photosynthetically available radiation models for ocean waters using neural networks | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Neural network forward models were developed to predict the subsurface vertical profile of instantaneous photosynthetically available radiation (IPAR) for both open ocean and coastal waters. | The models can be used to estimate IPAR profiles in open ocean and coastal waters efficiently. The models are integrated into a joint retrieval algorithm (FastMAPOL/component) for satellite retrieval of IPAR using polarimetric measurements. | The outputs include surface IPAR and two fitting coefficients for IPAR's vertical profile. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-867 | Space Grade Linux | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | Current methodologies to deploy edge AI on spacecraft face a critical cost barrier, due in part to reliance on traditional real-time operating systems. The solution lies in adopting Linux, the industry standard for edge AI deployments, offering unmatched performance, compatibility, and an (AI) software ecosystem. However, Linux infrastructure for mission adoption is lacking. Space Grade Linux addresses this by developing a Linux distribution tailor-made for spacecraft, improving support for flight software frameworks, and establishing open-source components. In addition, this allows Space Grade Linux to accelerate development, promote code reuse, and facilitate the industry collaboration crucial to remaining competitive in the rapidly evolving field of AI. | Space Grade Linux reduces development cost of spacecraft software by allowing missions to tap into state-of-the-art AI software and hardware that would be otherwise unavailable or cost prohibitive to develop for traditional real-time operating systems. Open-source components in Space Grade Linux will also directly benefit the development of safety critical systems in other industries, such as robotics, medical devices, aerospace, and automotive. | |||||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-868 | Integrating Explainable Machine Learning with Physics for Enhanced Wildfire Detection in Observation-Constrained Environments | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Satellite-based fire detection provides critical data for fire management, fire spread modeling, air quality forecasts, and assessments of fire impacts on ecosystems and communities. Current fire detection algorithms, whether physics-based or machine learning (ML)-based, frequently fail when wildfires are obscured by dense clouds or smoke, creating data gaps that degrade the quality of air quality and fire emissions estimates. This project will develop an explainable multitask ML model for fire detection that is integrated with cloud and aerosol retrieval to enhance fire detection capabilities under clouds. | The research has the potential to substantially improve satellite-based fire detection, a key application and societal benefit from NASA's Earth science program, based on more consistent detection of fire activity needed for fire tracking and situational awareness, and improved detection under difficult observing conditions, including on days when extreme fire behavior generates deep injection of smoke (pyrocumulonimbus or PyroCb). | Outputs will be joint retrievals of cloud and aerosol vertical profile information and fire detections. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-869 | The airborne Compact Fire Imager (CFI) for measurements across the entire fire lifecycle | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | CFI is a new pushbroom instrument with six spectral bands between the shortwave infrared (SWIR) and thermal infrared (TIR), including two channels in the mid-wave infrared (MWIR) specifically designed to detect and characterize flaming and smoldering fires. CFI builds on the design and performance of the dual-band Compact Thermal Imager (CTI) that collected more than 15 million images from the International Space Station (ISS) in 2019, including thousands of fires. CFI leverages the stability and proven performance of innovative Strained-Layer Superlattice (SLS) detector technology on CTI with four specific improvements for fire science and applications: 1) a larger format SLS detector array that improves cross-track resolution and swath width, 2) a custom butcher block filter that provides six specific bands for fire science and applications, 3) a custom optical design that leverages the latest infrared glass technology, and 4) an enhanced processor card that supports instrument operation and onboard fire detection using machine learning (ML) algorithms. | Onboard fire detection can accelerate the delivery of life-saving information to first responders and fire managers. The demonstration of lightweight and effective AI/ML tools for onboard processing leverages recent advances in hardware and software needed to advance edge computing for disaster applications such as wildfire detection and characterization. | Onboard AI/ML system outputs fire detection information and estimated fire radiative power (FRP), a measure of fire intensity. 
| |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-870 | Natural Language query processor for Common Metadata Repository | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | A ChatGPT-like prompt query interface that uses large language models to extract intent from a chat query to determine spatial, temporal, and science variable filters. These filters are then applied to the Common Metadata Repository (CMR) online catalog API to return relevant earth science datasets. For example, 'Water temperature of Lake Michigan since 2021' would extract the spatial constraint associated with Lake Michigan, the temporal constraint associated with 'since 2021' (2021 -> present), and the remaining variable constraint of 'Water temperature', and apply those constraints to CMR. This functionality is now in our user acceptance testing environment at https://cmr.uat.earthdata.nasa.gov/search/nlp/ | Intuitive user experience through a prompt-based interface; improved accuracy of search results | Recommendations of earth science datasets via search and discovery | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-871 | PIX4DCloud | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Geostationary spectrometer based foundation model (ABI-FM) and evaluation of benefits for 3D cloud and convection related downstream tasks | A new all-sky EO-FM model. A thorough evaluation of downstream task benefits. A suite of 3D cloud prediction models from spectrometers. | Gap filling; prediction | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-872 | Lunar Foundation Model | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Funded by the Office of the Chief Science Data Officer, the Lunar Foundation Model (LFM) is a joint effort between GSFC, MSFC, and IBM that will harness a large and diverse array of multi-modal datasets from recent missions with the goals of 1) creating a working example of an FM that demonstrates the construction and scientific use of FMs in planetary science, 2) expanding the community of machine learning practitioners within the lunar community and across planetary science, and 3) piloting the use of FMs in lunar science and/or lunar exploration applications. | 1) Create a working example of an FM that demonstrates the construction and scientific use of FMs in planetary science, 2) Expand the community of machine learning practitioners within the lunar community and across planetary science, and 3) Pilot the use of FMs in lunar science and/or lunar exploration applications. | |||||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-873 | MADI - Modular AI for Design and Innovation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | MADI (Modular AI for Design and Innovation) is a decentralized, open-source AI platform that identifies unexplored research "whitespace" between scientific disciplines through secure plugin architecture and interactive visualization. Unlike traditional chatbots, MADI enables collaborative human-AI partnerships where researchers can access proprietary data through authenticated plugins while maintaining security protocols, visualize knowledge relationships through 3D network graphs, and discover cross-disciplinary innovation opportunities that conventional approaches might miss. The platform demonstrates substantial efficiency gains, including 50% reduction in brainstorming time and 1,200 FTE hours saved annually, while operating cost-effectively at $500-600 per month. Currently transitioning from NASA-internal tool to Apache 2.0 open-source release, MADI represents an attempt at "democratic cognitive symbiosis" designed to prevent AI power concentration while advancing scientific discovery through transparent, community-driven development that serves public benefit for all. | Improved innovation and design methods demonstrating 50% reduction in brainstorming session duration, saving approximately 1,200 FTE hours annually for ten-person teams. Cost-effective operations at $500-600/month with $0.05 per conversation variable costs. Accelerates cross-disciplinary research discovery by identifying unexplored "whitespace" between scientific domains. Enables secure collaboration across NASA centers without duplicating AI infrastructure investments. 
| Research gap identification reports, cross-disciplinary connection recommendations, interactive 3D knowledge graph visualizations, whitespace analysis summaries, technology combination suggestions, collaborative workspace insights, thought process transparency dashboards with dependency diagrams, plugin-mediated data synthesis, automated document classification (security levels), innovation opportunity assessments, and audit trails for compliance tracking. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-874 | Developing an ML-based subcolumn generator | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ML generator for cloudy and precipitating subcolumns to be used in global climate models. | Improvement of cloud and precipitation processes in global climate models for more realism and better performance. | Subgrid cloud and rainfall variability from grid statistics. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-875 | CHESS: Coronal Hole Extraction with Semantic Segmentation | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | This project aims at expanding the training of two Convolutional Neural Networks (CNNs) that we have already developed to obtain a more efficient, more accurate, and least-biased CNN model for segmenting coronal holes (CHs). Our two CNNs are based on (i) a U-Net and (ii) a Res-U-Net architecture for Coronal Hole Image Segmentation, with model (ii) currently being more accurate than model (i). These two CNNs have been pre-trained with the coronal hole (CH) boundary data from the Heliophysics Events Knowledgebase (HEK). These initial, pre-training data of the CH boundaries are obtained by the Spatial Possibilistic Clustering Algorithm (SPoCA) applied to images of the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO), using the extreme ultraviolet (EUV) 193-Å filter. In many instances, this algorithm cannot differentiate between a CH and another solar structure called "filament". Our project will overcome this limitation by adding ground-based observations of the He I 10830 Å spectral line, which is able to provide such disambiguation. | Improved segmentation of coronal holes with (i) higher accuracy, (ii) better performance, (iii) ready to be implemented in research-to-operations pipelines. | Binary masks of coronal holes and co-spatial maps of quantified uncertainties. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-876 | Machine learning for X-ray astronomical spectroscopy | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We use Simulation Based Inference to construct 34,000 artificial spectra that are representative of observed Active Galactic Nuclei X-ray spectra with NASA's NuSTAR X-ray telescope. Based on previous literature results for these spectra, we train, validate, and test a new neural network model, ML-mytorus, which predicts best values for key physical parameters fast, automatically, and with a very high degree of accuracy, including error estimates. We make the code publicly available and set up a dedicated webpage for the community to upload any spectrum and use the NN model. A publication is about to be submitted for review. | Reproducibility: Standard spectral fitting is done interactively with human decision making. Automation and speed: Using the trained neural network model is completely automated and takes just seconds for a given spectrum (compared to hours/days for interactive fitting). Openly available: Anyone can go to our webpage, load X-ray spectra in 2-column format, and obtain results in seconds by clicking a single button. Extensibility: The methodology can be extended to other X-ray (and non-X-ray) telescopes producing spectra, including other types of astronomical observations, provided telescope and physical model characteristics are known. Training only needs a modest number (tens...) of existing previously fitted observed spectra, which are statistically perturbed to produce literally thousands of simulated spectra for training. Simulations, training, and validation take no more than a few days with modest high-performance computing use. 
| Predictions for astrophysical parameters with uncertainties | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-877 | Using MML code generation to create new high-energy astrophysics science software | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | We plan to train an LLM on previous large code bases for high-energy astrophysics science pipeline and analysis software. HEASARC is under a mandate to work on next-generation science software to modernize the existing code base and make it easier for new high-energy astrophysics missions to develop data plans at lower cost. By training LLM(s) on existing code the code assistants should be more effective in reducing the FTEs required to implement new systems. | Modernizing the software that is maintained by HEASARC and reducing the FTEs for new development | A code assistant optimized for this type of software | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-878 | Diffusion Modeling of the Solar Corona | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Diffusion models such as DeepMind’s GenCast have demonstrated powerful performance in terrestrial weather forecasting, achieving results on par with, and surpassing, leading medium-range numerical weather simulations. We present our initial results to train a Denoising Diffusion Probabilistic Model (DDPM) of the solar corona, aimed as a first step at generating synthetic solar magnetic fields based on conditioning inputs. Conditioning inputs include multi-spectral imagery, magnetograms, and other measurements commonly used to frame coronal inverse problems. Our initial experiment targets the generation of synthetic global magnetic field configurations of the solar corona based on conditioning with the solar cycle phase. The model is trained on 10 years of WSA potential field source surface (PFSS) model runs, augmented with 15° to 345° azimuthal rotations to increase data diversity. Training is conducted in the spherical harmonics domain, leveraging concepts from Fourier Neural Operators (FNOs) and Spherical Fourier Neural Operators (SFNOs). A physics-informed loss function, built on a differentiable spherical harmonic expansion, is used to maximize generation of realistic 3D magnetic potentials. | First step towards probabilistic modeling of the solar corona | 3D Magnetic Field Structures in the Solar Corona | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-879 | Data-Mining Similar Scenarios for Uncertainty Quantification of Solar Wind Predictions at L1 | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Accurate Uncertainty Quantification (UQ) for space weather forecasts is an ever-important supplementary variable to enable accurate risk response. Modeling uncertainty is itself a “model of a model”, and one of the best datasets to describe a model’s performance is a past database of its predictions and after-the-fact observations. In this work, we develop a method based on k-NN and kernel regression to quantify uncertainty in the WSA solar wind model and its predictions of the solar wind speed at L1. By constructing state vectors that describe the current forecasting context—recent observations, recent predictions, and future predictions—we build a catalog of “similar scenarios” from past data. With a set of similar scenarios at each timestep, we can base our uncertainty on the performance in those cases. This approach—suitable for low-dimensional datasets such as time series—is extremely fast and interpretable. We find that the resulting uncertainty estimates naturally capture structured patterns in forecast error, such as shifts between solar minimum and maximum, and periodic features on the scale of half a solar rotation. | Being integrated into WSA software for operational use in Moon 2 Mars and NOAA Space Weather Prediction Center | Uncertainty Estimation (sigmas) | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-880 | Flarenet | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Developing a CNN model to identify stellar flares in the 20-second cadence TESS data product. | Quickly and uniformly identify flares for a large number of stars, which saves time and allows for larger flare samples. | Predictions, for each time step, of whether there is likely a flare. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-882 | Uniting Physics and Machine Learning for Enhanced Heliophysics Insights | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Solar Neutron TRACking (SONTRAC) instrument is designed to detect incident solar neutrons in an energy range that fills a key gap in understanding flare ion acceleration. SONTRAC tracks recoil protons generated by neutron interactions as they traverse a fiber bundle volume, depositing ionization energy along their paths. Current reconstruction of proton tracks—determining energy deposition and momentum vectors—is labor-intensive and ambiguous. | Autonomous neutron event reconstruction in SONTRAC will significantly reduce manual analysis time, increase the fraction of usable neutron interaction events, and improve instrument efficiency. For NASA, this translates to higher-fidelity science return from the same mission resources, enabling better understanding of solar flare particle acceleration. For the broader public, improved solar particle monitoring enhances space weather forecasting, which protects satellite operations, communications, and astronaut safety. Key performance indicators (KPIs): • Increased percentage of reconstructed neutron events (higher detection efficiency) • Reduction in manual reconstruction effort (time savings for analysts) • Improved accuracy and reliability of neutron track reconstruction (science quality ROI) | 3D reconstructed particle tracks and interaction events from SONTRAC detector readouts. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-884 | Cloud translator for large-scale models | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Simulates frequency of occurrence of cloud types as seen from a space-based imager from radiative fluxes. | Evaluate realism of cloudiness in global climate models | 2D monthly frequency of occurrence of cloud types on a mesoscale grid | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-886 | AI/ML for Anomaly Detection and Health Monitoring for the NSN | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The project develops ML/AI methods/tools to enhance the operation and sustainment of GSFC Space Network (ACCESS-managed) assets. We analyze telemetry from TDRS and STPSat-6 to assess state of health, detect anomalies, and predict remaining useful life (RUL) of spacecraft components. The ML/AI algorithms are implemented in Python and MATLAB and are reported and validated with engineering teams during Sustaining ML meetings. Validated models are being integrated into a GUI to provide an intuitive platform for engineers to analyze spacecraft telemetry and support real-time monitoring. Documentation and results are summarized and posted to SharePoint. | Automated telemetry anomaly detection and forecasting reduce manual review and improve Space Network availability. In internal tests, the TDRS-8/10 Bus Voltage Limiter (BVL) detector achieved ~98% true-positive rate with very few false alarms and processed ~10 years of telemetry in ~5 minutes. Early, prioritized alerts (hours–days sooner) have surfaced incipient battery issues—e.g., two early-warning signals of failing cells—and the battery tool identified diverging cells on TDRS-9/10. Remaining-useful-life (RUL) and short-term forecasts enable condition-based maintenance and proactive scheduling, reducing unplanned downtime and associated costs. | Anomaly scores/flags from telemetry (after thresholding), predicted telemetry (SA current, battery voltage), SOH index for selected subsystems, remaining-useful-life (RUL) predictions with confidence bands, and a summary of the ML/AI results for review and action by engineers. 
| Anomaly scores/flags from telemetry (After thresholded), Predicted Telemetry (SA current, Battery Voltage), SOH index for selected subsystems, remaining-useful-life (RUL) predictions with confidence bands, Summary of the ML/AI results for review and action by engineers. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-888 | Solar Neutron Tracking Spectrometer (SONTRAC) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To develop new instrumentation that will be capable of measuring winds in planetary atmospheres. | Will add understanding of the mechanisms behind solar activity and its effects on space weather. | |||||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-889 | Text-to-Spaceship | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Text-to-Spaceship is a NASA-led initiative exploring how artificial intelligence can transform mission development. The approach transforms traditional slow, linear development into a faster, iterative process, where engineers guide intent and AI rapidly handles requirements generation, design creation, and manufacturability assessment. The result is a future where missions can be conceived, iterated, and realized with unprecedented speed and rigor, expanding the boundaries of science and exploration. This initiative also strengthens NASA’s role in pioneering AI-enabled engineering, ensuring that emerging AI-for-hardware technologies are safe, rigorous, and aligned with national priorities. | This AI-powered workflow transforms traditional design processes into an integrated computational pipeline that dramatically reduces iteration time, accelerates mission development, lowers costs and risks, and empowers teams to explore breakthrough mission concepts. By automating routine engineering tasks, Text-to-Spaceship frees human experts to focus on creative problem-solving, mission-critical decisions, and technology development where their expertise provides maximum value. | Systems models, proposals, reports, hardware design artifacts such as CAD models. | Systems models, proposals, reports, hardware design artifacts such as CAD models. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-890 | Roman Space Telescope WFI Pupil alignment verification | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Using machine learning (ML) to determine the pupil alignment of the Wide Field Instrument of the Roman Space Telescope during spacecraft testing in thermal vacuum conditions. The ML algorithm is trained on a large data set of possible misalignments to estimate the actual misalignment from the measured data. | This is a backup technique to verify the RST WFI pupil alignment. Because it is a backup, it creates redundancy in the process, making it more reliable. A more reliable approach can save the project time and money by completing the verification in a more expedited way. This can ultimately help keep the project on schedule and within cost, benefiting NASA and the public. | This approach outputs a prediction of the RST WFI pupil alignment state. | This approach outputs a prediction of the RST WFI pupil alignment state. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-891 | A hybrid machine learning approach for calibration and regionalization of LSM soil and vegetation parameters | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This work explores the combined use of machine learning and traditional model calibration methods to develop a high-resolution (1 km) soil and vegetation parameter dataset for North America and Central America. First, we use a traditional calibration method (i.e., a genetic algorithm (GA)) to calibrate soil and vegetation parameters to Soil Moisture Active Passive (SMAP) soil moisture at 25 km spatial resolution over the CONUS, Asia, and Europe. Next, we use a series of machine learning algorithms, including Neural Network, Random Forest, and XGBoost, to downscale and regionalize the optimized parameters for the National Land Data Assimilation System - Phase 3 (NLDAS-3) domain (i.e., North and Central America). The goal is to develop high-resolution parameters that produce a land surface model (i.e., Noah-MP) soil moisture climatology that is more in line with SMAP’s SM climatology. | The ultimate benefit of this work is that it could improve land surface model simulations of the water cycle through more efficient remote sensing data assimilation (DA). Current DA methods rely on bias correction and CDF matching to translate observations (SMAP) into the same climatology as the model. However, this often filters out (as noise) useful signal about the human component of the water cycle, such as irrigation, in the observations. The ultimate goal of our work is to develop parameters that generate a model soil moisture climatology similar to SMAP, possibly bypassing the need for bias correction. As such, it may allow us to more readily incorporate information that SMAP provides about irrigation and other non-geophysical activities that are difficult to model, through DA of NASA remote sensing observations. | The system outputs are soil and vegetation parameters at 1 km resolution. | The system outputs are soil and vegetation parameters at 1 km resolution. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-892 | Mapping anthropogenic water cycle impacts in a future climate: A global digital twin for scenario-driven exploration | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project develops an emergent constraint emulator for future changes in water storage estimates based on the available historical record of GRACE and GRACE-FO measurements. | An emulator to predict groundwater will reduce computation time relative to larger physical models. Additionally, predicting future changes in water storage estimates will better inform water managers for upcoming periods of water surplus or scarcity. Current results suggest we can predict 2-3 months into the future, and future work intends to identify the maximum number of months we can predict. | Predictions of liquid water equivalent thickness in centimeters. | Predictions of liquid water equivalent thickness in centimeters. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-893 | Nominal Forecast Sub System for the NASA Coastal Zone Digital Twin | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Development of an ML-based emulator enabling what-now, what-next, and what-if analysis for flood and water quality indicators in the Chesapeake Bay, trained using Land Information System-modeled land surface variables | Daily to subdaily forecasts of flooding and water quality indicators along the Chesapeake Bay and other US coasts | What-next and what-if predictions around flood and water quality use cases | What-next and what-if predictions around flood and water quality use cases | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-894 | AI assisted Image recognition for EEE parts kitting and auditing | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Due to limits in personnel capacity, the warehouse inventory contract (TRAX), and the Goddard Material Management System (MMS), EEE parts kitting and auditing is labor-intensive and error-prone. This project aims to use image recognition to reduce the man-hours involved in the kitting and auditing process. | The outcome of this project will benefit the Goddard EEE parts engineer, the Printed Wiring Assembly card lead, the EEE parts technician, the vendor who receives the kit, and Quality Assurance (QA). The measured KPIs include: 1) the man-hours saved in generating a correct parts kitting list with all required information; and 2) the accuracy achieved with AI assistance compared to the old method. The overall benefit should consider a combination of both factors, not just the accuracy improvement. | A parts kitting list in spreadsheet format, checked against released BOMs, including the Part Number, Quantity, Manufacturer, Date code/lot, and batch data. | A parts kitting list in spreadsheet format, checked against released BOMs, including the Part Number, Quantity, Manufacturer, Date code/lot, and batch data. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-895 | Development of a next-generation snow and ice product (SNIP) for operational applications | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project aims to develop interpretable AI/ML models to improve global snow depth retrieval from AMSR2 brightness temperature observations. | This project establishes a benchmark for AI/ML applications in passive microwave remote sensing and demonstrates the potential for AI/ML to substantially advance snow depth estimation capabilities. The near-real-time global snow depth product supports critical operational applications like transportation safety, weather services, and seasonal water supply planning. It also opens the possibility for improved snow depth retrievals across the entire multi-decadal passive microwave satellite era – the longest continuous dataset available for global snow monitoring. | daily 10 km global snow depth estimates | daily 10 km global snow depth estimates | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-896 | Simultaneous emulation and downscaling of modeled soil state variables with machine learning | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We propose a lightweight, computationally efficient machine learning (ML) model capable of emulating LIS-based soil moisture and soil temperature and downscaling them from a native 10 km resolution to 1 km resolution. Our approach is extendable to other variables as long as a non-linear relationship between meteorological forcing and the variable of interest can be conceptualized as modulated by local conditions (elevation, soil type, land cover, vegetation). A branched neural network (NN) architecture then structurally represents this relationship. As part of the project, different NN architectures and input combinations have been tested and assessed using SHapley Additive exPlanations (SHAP) values and ablation analysis. Currently, the downscaled product is being validated and compared to other high-resolution products. | Using the proposed method, it is possible to obtain LIS-like quality predictions for soil state variables in seconds, as well as to provide unprecedented 1 km downscaled LIS data relevant to a wide variety of applications. The low computational cost of inference and the ability to resolve fine-resolution features expedite access to crucial information for decision-making. | The model outputs are emulated LIS-like soil moisture and soil temperature (predictions), as well as downscaled soil moisture and soil temperature. | The model outputs are emulated LIS-like soil moisture and soil temperature (predictions), as well as downscaled soil moisture and soil temperature. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-897 | Hydrology Copilot | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | We are building a multi-agent AI copilot that lets users explore and apply the new 1-km, hourly North American Land Data Assimilation System (NLDAS) version 3 dataset through natural-language queries. The copilot—built on NASA Earth Copilot with an agentic Retrieval-Augmented Generation (RAG) stack (Azure AI Search/Foundry + Synapse)—explains variables, retrieves relevant data and workflows, and guides users from question to analysis. It lowers the barrier for scientists, planners, and decision-makers to use NLDAS-3 for drought monitoring, flood assessment, and agricultural risk forecasting. | • Time-to-insight: converts plain-language questions into data queries, plots, and subsets in minutes. • Accessibility: non-experts can discover variables/units and relevant documentation without deep tooling knowledge. • Decision support: speeds drought/flood/ag risk assessments by surfacing the right NLDAS-3 variables and spatial/temporal subsets. | • Web copilot app (chat + map): variable discovery, data subsetting, previews, and downloads • Auto-generated plots/maps (time series, anomaly maps) | • Web copilot app (chat + map): variable discovery, data subsetting, previews, and downloads • Auto-generated plots/maps (time series, anomaly maps) | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-898 | Developing high-resolution multidecadal satellite remote sensing-based snow lifecycle reanalysis products over the Northern Hemisphere | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A deep learning model is being trained using a subset of meteorological forcings and remote sensing observations of snow cover to reconstruct seasonal snow water content globally. The relatively small quantity of location-independent model inputs means that the model is transferable globally with little computational cost. | Deep learning model outputs are being assimilated with NASA process-based hydrologic models. This will provide the capability to bias-correct for common model errors that are driving snow-fed hydrologic biases in operationally used NASA products. | Global, daily, 1 km-resolution SWE and SWE uncertainty | Global, daily, 1 km-resolution SWE and SWE uncertainty | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-900 | GLOBE Program - Image processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Processing photo images submitted to the GLOBE Program through the GLOBE Observer app. These photos relate to submissions of Clouds, Mosquito Habitat Mapper, Trees, and Land Cover observations. GLOBE data is processed through the Amazon Rekognition SaaS (software as a service) tool to automate the image approval process (Amazon 2024). | The AI photo review system has dramatically reduced the staff time needed to screen photos and has therefore enabled NASA to process these crowdsourced data at a much faster speed. This leads to significantly more surface-based Earth system observations to enable science and societally relevant applications that would not be possible otherwise. | Object detection information, image approval/rejection decisions, automated blurring of detected faces and text for privacy protection, and flagging of images requiring human review. | c) Developed with both contracting and in-house resources | Mitchell Vantage Systems (MVS), subcontracted to Astrion | Yes | Object detection information, image approval/rejection decisions, automated blurring of detected faces and text for privacy protection, and flagging of images requiring human review. | Mostly uses pre-trained Amazon Rekognition models. The system processes citizen science photos for validation but does not train custom models. However, a custom process was developed to evaluate the Rekognition results for decision-making within the GLOBE context. | No | Yes | |||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-901 | Cloud Account Allocation Plan (CAAP) Cost Analytics Support | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Utilize machine learning to predict and allocate Cloud Account Allocation Plan (CAAP) cost and egress limits based on past actuals. | Automates predictions based on past performance figures, saving human time and effort and enabling more accurate cost estimates. | Prediction of future cost and egress | 06/01/2025 | c) Developed with both contracting and in-house resources | Raytheon | Yes | Prediction of future cost and egress | On an account-by-account basis, uses past cloud account incurred costs and data egress as collected in AWS via CloudFront logs. | No | Yes | ||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-902 | ML-Driven Wide Field Instrument (WFI) Image Calibration | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This machine learning-driven project aims to expedite the image calibration process for Roman Wide Field Instrument (WFI) data by developing an automated calibration system. Leveraging advanced AI algorithms, we aim to increase the speed and efficiency of image calibration, particularly for large-scale astronomical surveys and observations. | Faster calibration processes across all detector outputs. Streamlined calibration workflows enable faster processing times, accelerating scientific discoveries. | Trained machine learning models capable of applying flat-field and linearity corrections to WFI images. Automatically calibrated images ready for further analysis. Adaptive calibration parameters tailored to specific observing conditions and instrument characteristics. | Trained machine learning models capable of applying flat-field and linearity corrections to WFI images. Automatically calibrated images ready for further analysis. Adaptive calibration parameters tailored to specific observing conditions and instrument characteristics. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-904 | HEASARC Bibliography | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Use an LLM to analyze existing publications and extract references to observations archived by the HEASARC | Maintain current status of the impact of the HEASARC on the advancement of the science. | Observatory and OBSID | Observatory and OBSID | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-905 | Bump in the Wire (BITW) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | BITW is a software technology designed to detect anomalous commands and trends in telemetry beyond traditional detection methods such as command parsers and telemetry limit checkers. It provides an extensible architecture that enables a pipeline of detection plugins based on Machine Learning (ML) models and signature-based algorithms. | The intended benefit of this technology is to improve the detection of anomalous space system commands and telemetry beyond traditional command parsers and telemetry limit checkers, and to reduce the time to detect anomalies by embedding the detection software on an edge computing platform such as a spacecraft. | The expected output is identification of anomalies in command data and telemetry values | The expected output is identification of anomalies in command data and telemetry values | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-906 | Point spread function (PSF) calibration using machine learning (ML) techniques | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Habitable Worlds Observatory (HWO) aims to image and thoroughly characterize exoEarths and is the highest priority of NASA as recommended by the Astro2020 Decadal Survey. With this goal in mind, NASA HQ has directed NASA GSFC to open a program office to facilitate advancing technologies related to HWO. To achieve the target of imaging dim exoplanets (on the order of 10^10 times fainter than the host star), it is necessary to create and maintain high-contrast zones in the science camera. This imposes stringent constraints on the telescope requirements. At GSFC, we are working on point spread function (PSF) calibration using machine learning (ML) techniques. We use wavefront sensor camera images as input and science images to train various ML methods. | Cost and time savings. | Mapping from wavefront sensor images to science images | Mapping from wavefront sensor images to science images | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-907 | Machine learning for low-order wavefront sensing | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Use machine learning to increase the accuracy and range of a low-order wavefront sensor. | Improve sensors used for imaging planets that could harbor life. | Predictions | Predictions | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-908 | IV&V Requirements Quality & Traceability Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This project develops software to generate pre-computed, draft analysis results for text-based software requirements artifacts on NASA programs within the scope of Independent Verification & Validation (IV&V). The software provides a web-based interface that enables users to filter pre-computed findings by issue severity, evaluate generative AI–produced results, and auto-draft IV&V issues. Its analysis capabilities include assessment of requirement quality attributes, decomposition of functionality, and evaluation of both upward and backward traceability. Initial development and testing leverage synthetic data hosted on AWS (Cloud – Other). Beginning in FY26, the project will transition to beta and production phases, operating on local on-premises HPC infrastructure with access to real ITAR-level NASA mission data. | Implementation of the local on-premises HPC environment and subsequent execution of AI-enhanced IV&V analysis activities will begin in FY26. These activities will be complemented by expert human engineering judgement and further analysis. By integrating AI-generated content into NASA IV&V's current processes, analysis results and assessments can be produced significantly faster than with traditional methods. This accelerated capability is expected to positively impact the mission projects supported by NASA IV&V by enabling earlier identification of issues, more thorough characterization of residual risk and impact of any open identified issues, and delivery of improved, risk-informed decision-making support to programs. | The AI system (Web UI-based) outputs requirements, their rationale, assessments (recommendations) for quality, decomposition, and traceability, and flags through color coding. Analysts will also have the opportunity to autogenerate draft issues, provide feedback on the results through buttons and comment boxes, and export results to a spreadsheet. | The AI system (Web UI-based) outputs requirements, their rationale, assessments (recommendations) for quality, decomposition, and traceability, and flags through color coding. Analysts will also have the opportunity to autogenerate draft issues, provide feedback on the results through buttons and comment boxes, and export results to a spreadsheet. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-909 | IV&V Static Code & Implementation Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This project develops software to generate pre-computed, draft analysis results for static code and code implementation artifacts on NASA programs within the scope of Independent Verification & Validation (IV&V). The tool provides a web-based interface and plug-ins for code IDEs such as Visual Studio that enable users to filter pre-computed static code findings, review generative AI–assisted assessments, and auto-draft IV&V issues. Pending enhancements include secure coding evaluations and assessments of code traceability to other artifacts (requirements, design, test, etc.) for identifying functional gaps. Initial development and testing utilize synthetic data hosted on AWS (Cloud – Other). Beginning in FY26, the project will transition to beta and production phases, operating on local on-premises HPC infrastructure with access to real ITAR-level NASA mission data. | Implementation of the local on-premises HPC environment and subsequent execution of AI-enhanced IV&V analysis activities will begin in FY26. These activities will be complemented by expert human engineering judgement and further analysis. By integrating AI-generated content into NASA IV&V's current processes, analysis results and assessments can be produced significantly faster than with traditional methods. This accelerated capability is expected to positively impact the mission projects supported by NASA IV&V by enabling earlier identification of issues, more thorough characterization of residual risk and impact of any open identified issues, and delivery of improved, risk-informed decision-making support to programs. | The AI system will provide interaction via a plugin in Microsoft Visual Studio where the analyst can ask questions about code and initially assess static code findings. More advanced versions of this tool will pull additional context in from other NASA project artifacts such as requirements and design. Analysts will be able to generate draft, pre-written technical issues. | The AI system will provide interaction via a plugin in Microsoft Visual Studio where the analyst can ask questions about code and initially assess static code findings. More advanced versions of this tool will pull additional context in from other NASA project artifacts such as requirements and design. Analysts will be able to generate draft, pre-written technical issues. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-910 | IV&V Test Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This project develops software to generate pre-computed, draft analysis results for text-based test artifacts on NASA programs within the scope of Independent Verification & Validation (IV&V). The software provides a web-based interface that enables users to filter pre-computed findings by issue severity, evaluate and provide feedback for generative AI–produced results, and auto-draft IV&V issues. Its analysis capabilities include assessment of tests for completeness and consistency. Initial development and testing will utilize limited synthetic data or open source data hosted on AWS (Cloud – Other). Beginning in FY26, the project will transition to beta and production phases, operating on local on-premises HPC infrastructure with access to real ITAR-level NASA mission data. | Implementation of the local on-premises HPC environment and subsequent execution of AI-enhanced IV&V analysis activities will begin in FY26. These activities will be complemented by expert human engineering judgement and further analysis. By integrating AI-generated content into NASA IV&V's current processes, analysis results and assessments can be produced significantly faster than with traditional methods. This accelerated capability is expected to positively impact the mission projects supported by NASA IV&V by enabling earlier identification of issues, more thorough characterization of residual risk and impact of any open identified issues, and delivery of improved, risk-informed decision-making support to programs. | Similar to Requirements Analysis, and borrowing the same feature set, the AI system (Web UI-based) will provide an assessment (recommendations) output on test cases, procedures, or steps, highlighting any potential issues to the analyst reviewer. Analysts will also have the opportunity to autogenerate draft issues, provide feedback on the results through buttons and comment boxes, and export results to a spreadsheet. | Similar to Requirements Analysis, and borrowing the same feature set, the AI system (Web UI-based) will provide an assessment (recommendations) output on test cases, procedures, or steps, highlighting any potential issues to the analyst reviewer. Analysts will also have the opportunity to autogenerate draft issues, provide feedback on the results through buttons and comment boxes, and export results to a spreadsheet. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-911 | IV&V AI Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This project delivers a NASA IV&V AI Assistant powered by LibreChat with Retrieval-Augmented Generation (RAG) to provide a secure, conversational interface for NASA IV&V engineers. The Assistant enables users to interact directly with NASA program documents, asking questions that span subject matter expertise and program-specific content relevant to software assurance activities, including accelerated system understanding, analysis process planning, and context-specific lessons learned and potential risk considerations. Initial development and evaluation are conducted with synthetic data hosted on AWS (Cloud – Other). Beginning in FY26, the project will transition to beta and production phases, operating on local on-premises HPC infrastructure with access to real ITAR-level NASA mission data, IV&V process data, and lessons learned. | The initial deployment of the NASA IV&V AI Assistant will leverage the LibreChat platform with automated embedding and Retrieval-Augmented Generation (RAG) to provide end-users with efficient access to relevant, necessary knowledge to complete analysis activities and generalized inference capabilities. More advanced iterations are anticipated to include fine-tuning on IV&V-specific datasets such as historical issues, risks, and lessons learned. By integrating GenAI-powered search and chain-of-thought reasoning into IV&V workflows, the Assistant will enable engineers to rapidly locate critical information, streamline analysis activities, and enhance overall productivity. This accelerated capability is expected to positively impact the mission projects supported by NASA IV&V by enabling earlier identification of issues and delivery of analysis assurance conclusions. | The output for this system is similar to a ChatGPT-like interface; that is, the output is text information related to what the user is providing queries on. | The output for this system is similar to a ChatGPT-like interface; that is, the output is text information related to what the user is providing queries on. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-912 | IV&V Process Accelerators through Generative AI Capabilities | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This project is a collection of utility tools generated by personnel and early adopters at the NASA IV&V Program who have explored and prototyped generative AI applications that can streamline and enhance Independent Verification & Validation (IV&V) activities. Process accelerator efforts focus on automating common, high-effort tasks such as Technical Issue Memorandum (TIM) / formal IV&V issue generation, automated peer review support of technical issues, milestone review summaries, NASA program risk mapping to open IV&V issues, and automated assurance conclusion statements. Additional concepts include automated generation of value statements, status reports, sensitivity/content classification, Tier 1 security-related queries on NASA programs, and IV&V methods generation. These exploratory activities serve as feasibility studies, demonstrating how generative AI can reduce analyst workload, accelerate analysis and reporting, and provide a foundation for future funded capabilities. Insights gained from these concepts directly inform NASA IV&V’s broader AI strategy, guiding the maturation and adoption of advanced assurance tools. | Results and positive outcomes from the NASA IV&V Process Accelerator efforts directly inform and strengthen larger, funded AI initiatives. These exploratory activities have demonstrated how generative AI can streamline traditionally resource- and labor-intensive tasks. Even when lightly validated using synthetic or open-source data, these proof-of-concept activities highlight that routine, high-effort activities can be effectively automated to a practical degree. This includes processing vast quantities of data in a deterministic manner and providing an end-user product for human review and additional analysis, where needed. Collectively, these concepts showcase the potential of generative AI to enhance efficiency, reduce analyst workload, and provide a foundation for more advanced AI-driven capabilities in support of NASA's missions. | Outputs on these concept development efforts span pre-processed spreadsheets, large generated Word documents, and outputs driven to other tools for human review and assessment. Outputs inform feasibility of generative AI capabilities and further implementation plans. | Outputs on these concept development efforts span pre-processed spreadsheets, large generated Word documents, and outputs driven to other tools for human review and assessment. Outputs inform feasibility of generative AI capabilities and further implementation plans. | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-914 | Estimating Ocean Color Properties from TEMPO | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Developing hourly ocean color retrievals across North America using the geostationary TEMPO instrument | The first hourly ocean color retrievals across North America will improve monitoring of ocean health | Chlorophyll Concentration, remote sensing reflectance | Chlorophyll Concentration, remote sensing reflectance | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-915 | Development of a Next-Generation Ensemble Prediction System for Atmospheric Composition | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The project's goal is to reduce the computational burden of atmospheric composition modeling at the GMAO by building an AI emulator for the GEOS Composition Forecast model. | The project could lead to significant savings in the computational costs of atmospheric composition forecasts for NASA and could provide the public with longer-range probabilistic forecasts of air quality. | Global forecasts of air pollutants | Global forecasts of air pollutants | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-916 | XMM-GPT | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | XMM-GPT is a domain-specialized AI assistant built by fine-tuning Google's open-source FLAN family of LLMs through transfer learning on up-to-date XMM-Newton documentation. The model is then integrated into a retrieval-augmented generation (RAG) pipeline to ensure that responses are grounded in mission documentation. This reduces model hallucinations and introduces transparency into its decision-making process by relaying where in the documentation it got its answer. XMM-GPT acts as an interactive research aide, supporting a wide range of users (students, researchers, teachers, PIs) in learning about XMM-Newton and its software, the SAS. | Lower entry barriers for new users learning about XMM-Newton and its workflows. Coding assistant that helps to create scripts, explain SAS tasks, and point users to what is causing their coding bugs. Offloads "help desk" queries from mission staff. | Instructional Responses (step-by-step guides for SAS tasks), Code Assistance, Explanations/Summarizations of XMM-relevant information, Grounded Answers (links to documentation that holds the answer) | Instructional Responses (step-by-step guides for SAS tasks), Code Assistance, Explanations/Summarizations of XMM-relevant information, Grounded Answers (links to documentation that holds the answer) | |||||||||||||||||||||
| National Aeronautics And Space Administration | GSFC: Goddard Space Flight Center | NASA-917 | SAS Vision | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | SAS VISION is a convolutional neural network (CNN)-based computer vision model designed to identify previously undocumented astronomical objects such as AGNs, NGCs, and stellar sources using large volumes of XMM-Newton observational data. This system fine-tunes open-source CNNs through transfer learning using XMM-Newton raw FITS files and their corresponding SAS pipeline-processed outputs. It performs a two-stage process: Denoising & Enhancement (using DnCNN, AutoEncoders, U-Net/W-Net, or DDPMs to preprocess and enhance noisy X-ray data) and Object Identification (through CNNs with instance segmentation (Mask R-CNN)), enabling detection and classification of celestial objects. In addition, cross-observations will enable spatial matching of object features to confirm repeat detections. This model could enable large-scale automated cataloging of undocumented objects not present in SIMBAD, VizieR, or the databases queried by Aladin. | Discovery of New Objects, Accelerates Identification tasks, Improved Catalogs | Denoised Images, Segmentation Masks, Object Classifications, Multi-Observation Object Tracker | Denoised Images, Segmentation Masks, Object Classifications, Multi-Observation Object Tracker | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-920 | Adaptive State-Space Control via Neuromorphic Computing | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Neuromorphic hardware for autonomous control logic adaptation in robotics/aerospace | This use case addresses the limitations of traditional, fixed-logic control systems by implementing an adaptive state-space model on neuromorphic hardware. By representing a system's states and transitions as a network of spiking neurons and plastic synapses, the governing matrices of the control model are transformed from static constants into dynamic variables that evolve based on real-time sensory input and operational feedback. This enables the system to autonomously learn from its environment, optimize its own control logic, and adapt to unforeseen conditions, creating a foundation for truly intelligent and resilient autonomous systems in fields such as advanced robotics and aerospace. | governing matrices of the control model | governing matrices of the control model | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-921 | AI-Enhanced Mission Operations Reporting & Decision Support System | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NASA's Mission Control currently relies on a manual CHIT (Mission Action Request) system for real-time operational decisions, a process that places a high cognitive load on flight controllers who must complete forms, identify all stakeholders, and search historical data with basic tools, making it difficult to spot subtle trends. This use case proposes transforming this system by integrating an AI engine to streamline operations. | The proposed solution would leverage Natural Language Processing (NLP) to enable controllers to initiate CHITs using plain language, while machine learning models would proactively analyze data to identify emerging trends and provide context-aware smart search results from historical mission data. Additionally, the AI would perform an initial automated impact assessment on new requests, flagging potential risks to enhance the speed and safety of decision-making. | context-aware smart search results | context-aware smart search results | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-922 | Athena (EM32 Document Intelligence) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Converts 490K+ EM32 repository files into a searchable knowledge fabric. Supports 691+ file extensions, builds cross-document graphs, enables text-to-image queries, OCR, and compliance-aware retrieval. Provides sub-second query response for cached content. | Converts 490K+ EM32 repository files into a searchable knowledge fabric. Supports 691+ file extensions, builds cross-document graphs, enables text-to-image queries, OCR, and compliance-aware retrieval. Provides sub-second query response for cached content. | cross-document graphs, enables text-to-image queries, OCR, and compliance-aware retrieval. | cross-document graphs, enables text-to-image queries, OCR, and compliance-aware retrieval. | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-923 | Autonomous Navigation Decision Making for Lunar/Martian Rovers | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Enable a rover (or rover swarm) to autonomously collect lunar regolith, haul it to a worksite, and execute construction tasks (e.g., berms, landing pad layers, trenching, or feedstock staging for sintering/printing) using AI for perception, mapping, planning, control, and health monitoring. | Enable a rover (or rover swarm) to autonomously collect lunar regolith, haul it to a worksite, and execute construction tasks (e.g., berms, landing pad layers, trenching, or feedstock staging for sintering/printing) using AI for perception, mapping, planning, control, and health monitoring. | execute construction tasks (e.g., berms, landing pad layers, trenching, or feedstock staging for sintering/printing) | execute construction tasks (e.g., berms, landing pad layers, trenching, or feedstock staging for sintering/printing) | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-924 | Autonomous Object Manipulation Decision Making for Lunar/Martian Rovers | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Allow a robotic arm(s) on the lunar surface (or inside a habitat/lander bay) to autonomously identify, retrieve, and swap between different tools and payloads, then execute a sequence of tasks (e.g., drilling, sample collection, regolith manipulation, structural assembly, or power connector mating). | Allow a robotic arm(s) on the lunar surface (or inside a habitat/lander bay) to autonomously identify, retrieve, and swap between different tools and payloads, then execute a sequence of tasks (e.g., drilling, sample collection, regolith manipulation, structural assembly, or power connector mating). | Allow a robotic arm(s) on the lunar surface (or inside a habitat/lander bay) to autonomously identify, retrieve, and swap between different tools and payloads, then execute a sequence of tasks (e.g., drilling, sample collection, regolith manipulation, structural assembly, or power connector mating). | Allow a robotic arm(s) on the lunar surface (or inside a habitat/lander bay) to autonomously identify, retrieve, and swap between different tools and payloads, then execute a sequence of tasks (e.g., drilling, sample collection, regolith manipulation, structural assembly, or power connector mating). | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-925 | BoundState (Compliance & Risk Engine) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Validates engineering parts, materials, and BOMs against NASA, ASTM, and program standards. Flags PFAS/non-compliant risks, generates property-based alternative recommendations with confidence bounds. Converts compliance into proactive, evidence-based design intelligence. | Validates engineering parts, materials, and BOMs against NASA, ASTM, and program standards. Flags PFAS/non-compliant risks, generates property-based alternative recommendations with confidence bounds. Converts compliance into proactive, evidence-based design intelligence. | Flags PFAS/non-compliant risks, generates property-based alternative recommendations with confidence bounds. Converts compliance into proactive, evidence-based design intelligence. | Flags PFAS/non-compliant risks, generates property-based alternative recommendations with confidence bounds. Converts compliance into proactive, evidence-based design intelligence. | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-927 | Data analysis and pattern recognition | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Agentic AI | We often rely on optical evaluation of specimens that do not necessarily have a textbook approach to evaluation. By feeding data into the knowledge base and creating an agent, Chat GSFC can give some reasonable evaluation and cross-reference our thoughts with related publications. It can also recommend test data and perform evaluations of the raw data, say from an XRD readout, rather quickly as compared to doing it by hand. The key is to feed it known calibrated data for its knowledge base. | We often rely on optical evaluation of specimens that do not necessarily have a textbook approach to evaluation. By feeding data into the knowledge base and creating an agent, Chat GSFC can give some reasonable evaluation and cross-reference our thoughts with related publications. It can also recommend test data and perform evaluations of the raw data, say from an XRD readout, rather quickly as compared to doing it by hand. The key is to feed it known calibrated data for its knowledge base. | evaluations of raw data | b) Developed in-house | Yes | evaluations of raw data | optical evaluations on specimens | No | No | ||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-928 | Dirty Vacuum rated 6-Axis Robotic Arm Toolpathing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | MSFC has acquired a custom dirty-vacuum-rated 6-axis robotic arm from Motiv Space Systems. This robotic arm, while purpose-built for manufacturing, does require custom toolpathing codes. I used Chat GSFC extensively to write a custom additive manufacturing slicer, G-code parser, and some quality-of-life upgrades for using the system. We completed various worldwide-first additive manufacturing builds on our homegrown directed energy deposition system using these codes. It was also used to debug and work through any issues and errors that arose. | MSFC has acquired a custom dirty-vacuum-rated 6-axis robotic arm from Motiv Space Systems. This robotic arm, while purpose-built for manufacturing, does require custom toolpathing codes. I used Chat GSFC extensively to write a custom additive manufacturing slicer, G-code parser, and some quality-of-life upgrades for using the system. We completed various worldwide-first additive manufacturing builds on our homegrown directed energy deposition system using these codes. It was also used to debug and work through any issues and errors that arose. | software | b) Developed in-house | Yes | software | energy deposition data | No | No | ||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-929 | Gap-filled and downscaled vegetation composited index derived from satellite optical observations | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This project is in partnership with ARC and GeoNEX under the Ecological Conservation NASA Earth Action program. It seeks to leverage Harmonized Landsat Sentinel-2 (HLS) data to provide gap-filled/downscaled NDVI composite using NASA's Prithvi model. | This project is in partnership with ARC and GeoNEX under the Ecological Conservation NASA Earth Action program. It seeks to leverage Harmonized Landsat Sentinel-2 (HLS) data to provide gap-filled/downscaled NDVI composite using NASA's Prithvi model. | gap-filled/downscaled NDVI composite | gap-filled/downscaled NDVI composite | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-930 | Lunar Foundation Model for Planetary Sciences | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Foundation Model trained on LRO WAC, NAC, and RTM imagery to assist lunar scientists with AI applications regarding the surface processes of the moon | Foundation Model trained on LRO WAC, NAC, and RTM imagery to assist lunar scientists with AI applications regarding the surface processes of the moon | surface processes of the moon | surface processes of the moon | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-931 | Lunar Rover Localization | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | AI to merge and analyze lunar reconnaissance data with terrain-relative navigation techniques, utilizing onboard sensors to generate high-resolution localized maps. Inputs are rover external sensor data and satellite imagery, which generate a map for detailed navigation. | high-resolution localized maps | a map for detailed navigation | a map for detailed navigation | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-932 | Machine learning for prediction and factor analysis of laser forming parameters | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Various machine learning techniques (neural network, support vector machine, etc.) employed to predict bending angle of sheet metal subjected to laser forming processes and to elucidate most pertinent factors via SHAP analysis | Various machine learning techniques (neural network, support vector machine, etc.) employed to predict bending angle of sheet metal subjected to laser forming processes and to elucidate most pertinent factors via SHAP analysis | most pertinent factors in predicting bending angle | b) Developed in-house | No | most pertinent factors in predicting bending angle | materials data | No | No | ||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-933 | MIRA (MAPTIS NLP Interface) | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NLP interface for querying legacy test reports in MAPTIS | Transforms legacy MAPTIS test reports into structured, queryable data. Enables natural language queries across test classes with validated answers tied to MAPTIS IDs, test dates, and methods. Achieved semantic parity for Flammability and Arc Tracking classes; pathway established for remaining 22 classes. | structured, queryable data | structured, queryable data | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-934 | Object classification, change detection, and anomaly detection in LiDAR and other point cloud datasets. | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Leveraging AI for point cloud analysis may advance accuracy and efficiency of: • Point cloud classification, enhancing capabilities for LiDAR-based autonomy and navigation. • Change detection, enriching scientific investigations of geomorphology and surface processes. • Anomaly detection, enhancing inspection of hardware and monitoring of manufacturing processes. | Leveraging AI for point cloud analysis may advance accuracy and efficiency of: • Point cloud classification, enhancing capabilities for LiDAR-based autonomy and navigation. • Change detection, enriching scientific investigations of geomorphology and surface processes. • Anomaly detection, enhancing inspection of hardware and monitoring of manufacturing processes. | Leveraging AI for point cloud analysis may advance accuracy and efficiency of: • Point cloud classification, enhancing capabilities for LiDAR-based autonomy and navigation. • Change detection, enriching scientific investigations of geomorphology and surface processes. • Anomaly detection, enhancing inspection of hardware and monitoring of manufacturing processes. | Leveraging AI for point cloud analysis may advance accuracy and efficiency of: • Point cloud classification, enhancing capabilities for LiDAR-based autonomy and navigation. • Change detection, enriching scientific investigations of geomorphology and surface processes. • Anomaly detection, enhancing inspection of hardware and monitoring of manufacturing processes. | |||||||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-935 | RDUST Regolith Dust Universal System Tracker | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | AI for real-time dust environment simulation and analysis | AI for real-time dust environment simulation and analysis | AI for real-time dust environment simulation and analysis | b) Developed in-house | No | AI for real-time dust environment simulation and analysis | AI for real-time dust environment simulation and analysis | No | No | ||||||||||||||||
| National Aeronautics And Space Administration | MSFC: Marshall Space Flight Center | NASA-936 | Research to Operations Platform | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Platform that leverages foundation models created by NASA IMPACT AI for Science to allow users to run inference on the fine-tuned models and visualize the results | Platform that leverages foundation models created by NASA IMPACT AI for Science to allow users to run inference on the fine-tuned models and visualize the results | visualizations | visualizations | |||||||||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0001 | Generative AI Solutions for Workplace Productivity (aka Gemini) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to improve employee productivity by providing smarter tools for employees to summarize, draft, and navigate operational records. | NARA is exploring how Generative AI tools can help employees work more efficiently and creatively. These AI tools can automate tasks, enhance communication and collaboration, and improve decision-making. | This AI solution will help draft emails and documents, create images, analyze data, take meeting notes, and manage workflows. | 01/09/2025 | c) Developed with both contracting and in-house resources | Yes | This AI solution will help draft emails and documents, create images, analyze data, take meeting notes, and manage workflows. | Agency is strictly using its internal data for the purpose of prototype work. The data is the internally available G Suite data (Google Drive, Chat, Meet, Gmail, etc.). | Yes | None of the above | No | ||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0002 | A1 Museum AI Project | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of limited document discoverability and intensive manual labor by using automated tagging to personalize the visitor experience and make millions of digital records instantly accessible. | NARA will use AI generated tags and topics to recommend and make records available to visitors from a pool of approximately 2 million digital records, freeing up staff time and enhancing and personalizing the user experience for A1 museum visitors. | This AI use provides enhanced visitor access to NARA records through an easy-to-use interface that identifies NARA records based on visitor responses to a chatbot conversation. | 01/10/2025 | a) Purchased from a vendor | Cortina | Yes | This AI use provides enhanced visitor access to NARA records through an easy-to-use interface that identifies NARA records based on visitor responses to a chatbot conversation. | The data is publicly available at catalog.archives.gov | https://catalog.data.gov/dataset/archival-descriptions-from-the-national-archives-catalog | No | https://www.archives.gov/files/privacy/privacy-impact-assessments/NAC%20PIA%20final%202017.pdf | None of the above | Yes | https://www.archives.gov/files/privacy/privacy-impact-assessments/NAC%20PIA%20final%202017.pdf | ||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0003 | NARA@WORK Kendra Search | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of inefficient employee information discovery by replacing outdated keyword search with a natural language system that provides instant, context-aware answers from disparate agency resources. | NARA is implementing an AI-driven contextual information retrieval tool (e.g., Amazon Kendra) to modernize search functionality on the NARA@Work internal site. This will enable staff to use Natural Language Processing (NLP) to query a wide range of agency information (Work Life, Benefits, Policies, etc.) and receive "Suggested Answers." | This AI-powered tool will drive significant productivity gains across the agency by enabling staff to find critical information faster and more efficiently. By utilizing contextual retrieval and "Suggested Answers," the system ensures higher quality and more accurate results compared to traditional search methods. | 01/09/2025 | b) Developed in-house | Yes | This AI-powered tool will drive significant productivity gains across the agency by enabling staff to find critical information faster and more efficiently. By utilizing contextual retrieval and "Suggested Answers," the system ensures higher quality and more accurate results compared to traditional search methods. | The data is internally (Intranet) available at internal site work.nara.gov | No | None of the above | No | ||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0004 | Amelia Earhart AI Search | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the manual search limitations when processing massive historical datasets by using Natural Language Processing to accurately locate and declassify specific records required for public release. | The intended purpose of this AI Proof of Concept (POC) is to efficiently retrieve and prepare for release all government records related to Amelia Earhart and her final trip, following the Presidential directive of September 26, 2025. The expected benefit is to leverage Natural Language Processing (NLP) search capabilities to overcome the limitations of traditional search methods, enabling the National Archives to effectively and accurately locate the necessary results from vast holdings and fulfill its legal mandate to make these specific historical records accessible to the American public. | This AI system's outputs will go beyond traditional keyword retrieval to identify, connect, and surface all relevant Amelia Earhart records, including contextual details about her final trip, thus providing researchers with better search results and showing connections between disparate records, making the overall research process easier and more insightful. | 01/12/2025 | b) Developed in-house | Yes | This AI system's outputs will go beyond traditional keyword retrieval to identify, connect, and surface all relevant Amelia Earhart records, including contextual details about her final trip, thus providing researchers with better search results and showing connections between disparate records, making the overall research process easier and more insightful. | The data is publicly available at Archives.gov | No | https://www.archives.gov/files/privacy/privacy-impact-assessments/NAC%20PIA%20final%202017.pdf | None of the above | Yes | https://www.archives.gov/files/privacy/privacy-impact-assessments/NAC%20PIA%20final%202017.pdf | ||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0005 | AI Pilot Project to Screen and Flag for Personally Identifiable Information (PII) in Digitized Archival Records | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of manual processing bottlenecks and privacy risks by automating the identification and redaction of sensitive personal information within massive digitized collections to ensure safe and rapid public access. | NARA is using AI to automatically find and remove sensitive personal information from its digitized historical records. This will make the process faster, more accurate, and better protect privacy while allowing for greater public access to these records. The AI will also help NARA manage its vast collection and adapt to future privacy needs. | This AI use case will produce redacted records, a prioritized list for redaction, and a tool for staff to redact unpublished records. | 01/07/2024 | b) Developed in-house | No | This AI use case will produce redacted records, a prioritized list for redaction, and a tool for staff to redact unpublished records. | Agency is strictly using its data to test the pre-trained model for the purpose of prototype work. The data is publicly available at catalog.archives.gov. | No | None of the above | Yes | ||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0006 | AI based Semantic Search for National Archives Catalog (aka ArchiAI) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of "unsophisticated" keyword-based search limitations by implementing semantic search that understands user intent and historical context, allowing researchers to find relevant records even when they don't match exact search terms. | NARA is using AI to create a smarter search engine for its online catalog. This new search understands the meaning behind your search, not just the keywords, and can even find hidden connections between documents. This will make research faster, easier, and more insightful. | This AI search tool will provide better search results and show connections between records, making research easier and more insightful. | 01/09/2024 | b) Developed in-house | Yes | This AI search tool will provide better search results and show connections between records, making research easier and more insightful. | Agency is strictly using its data to test the pre-trained model for the purpose of prototype work. The data is publicly available at catalog.archives.gov. | No | None of the above | Yes | ||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0007 | Auto-fill of Descriptive Metadata for Archival Descriptions | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of the "descriptive gap" created by labor-intensive manual cataloging by automatically generating metadata and summaries from document content, allowing NARA to process massive backlogs and make millions of records instantly searchable for the public. | NARA is using AI to automatically create descriptions (metadata) for its digital archives. This will save archivists time, make records easier to find, and help the public better understand the information in the National Archives Catalog. | This AI use case will create more complete and informative descriptions for records, making them easier to find in the National Archives Catalog. | 01/09/2024 | b) Developed in-house | No | This AI use case will create more complete and informative descriptions for records, making them easier to find in the National Archives Catalog. | The data is publicly available at catalog.archives.gov | https://catalog.data.gov/dataset/archival-descriptions-from-the-national-archives-catalog | No | https://www.archives.gov/files/privacy/privacy-impact-assessments/NAC%20PIA%20final%202017.pdf | None of the above | Yes | https://www.archives.gov/files/privacy/privacy-impact-assessments/NAC%20PIA%20final%202017.pdf | |||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0008 | Topic Summarizer and Entity Extraction using AI | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of unsearchable digital collections caused by massive descriptive backlogs, automating the creation of metadata for billions of digital objects to ensure they are immediately discoverable and understandable for the public without years of manual archival processing. | NARA plans to use AI to automatically create descriptions for its digital objects. This will free up archivists' time and make it easier for people to find and understand the digital materials in NARA's system. | This AI use case will output enriched metadata making NARA's digital assets more accessible and easier to manage. | 01/08/2026 | b) Developed in-house | No | This AI use case will output enriched metadata making NARA's digital assets more accessible and easier to manage. | No | None of the above | Yes | |||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0009 | Create an AI based knowledge articles user interface for working with CRG documents | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to improve case processing by providing staff with an automated retrieval tool that navigates the complex Case Reference Guide to deliver instant, accurate answers for veteran and personnel record requests. | NARA is developing an AI-powered tool to help its National Personnel Records Center (NPRC) staff quickly find information in their Case Reference Guide (CRG). This will improve the speed and accuracy of their work, leading to better customer service and less time spent training new employees. | This AI tool will give NPRC staff a user-friendly way to quickly find information in the CRG knowledge base. | 01/10/2024 | b) Developed in-house | No | This AI tool will give NPRC staff a user-friendly way to quickly find information in the CRG knowledge base. | Agency is strictly using its internal data for the purpose of prototype work. The data is internally available at https://spdr.nara.gov/ | No | None of the above | No | ||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0010 | EOP 42 Search PoC using AI Based Semantic Search | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of inefficient information retrieval within Presidential email archives by testing whether AI-driven semantic search can understand context and intent more effectively than traditional keyword matching to improve record accessibility. | The purpose of this Proof of Concept (POC) is to evaluate whether AI-driven semantic search provides a demonstrable improvement over the current keyword-based search experience for accessing EOP 42 digital records, including email archives. | The AI system's output will be significantly improved search results that are contextually relevant (semantic), effectively bypassing the limitations of simple keyword matching within EOP 42 digital records, thereby offering a demonstrable improvement in the search experience for users accessing materials, including complex email archives. | 01/12/2026 | a) Purchased from a vendor | Skylight | Yes | The AI system's output will be significantly improved search results that are contextually relevant (semantic), effectively bypassing the limitations of simple keyword matching within EOP 42 digital records, thereby offering a demonstrable improvement in the search experience for users accessing materials, including complex email archives. | Yes | None of the above | Yes | ||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0011 | Freedom of Information Act (FOIA) Discovery AI Pilot | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of growing FOIA backlogs and manual review bottlenecks by automating the discovery of relevant records and the redaction of sensitive data, ensuring responses are both rapid and compliant with privacy laws. | NARA is piloting the use of AI to improve its responses to Freedom of Information Act requests. This includes AI tools for finding relevant records and automatically redacting sensitive information, which will make the process faster, more accurate, and compliant with privacy laws. | This AI use case will output relevant records matching FOIA requests and redacted versions of those records with sensitive information removed. | 01/12/2026 | b) Developed in-house | No | This AI use case will output relevant records matching FOIA requests and redacted versions of those records with sensitive information removed. | NARA will train this AI use case based on previous FOIA requests that were released to the public | No | None of the above | Yes | ||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0012 | Archives.gov AI search | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of "unstructured data challenges" in a vast, complex catalog by replacing traditional literal-match search with semantic technology that understands researcher intent, connects disparate records, and surfaces relevant results that would otherwise remain hidden behind inconsistent terminology. | NARA is using AI to create a smarter search engine for its online Archives.gov public site. This new search understands the meaning behind your search, not just the keywords, and can even find hidden connections between documents. This will make research faster, easier, and more insightful. | This AI search tool will provide better search results and show connections between records, making research easier and more insightful. | 01/08/2026 | b) Developed in-house | Yes | This AI search tool will provide better search results and show connections between records, making research easier and more insightful. | The data is publicly available at Archives.gov | https://catalog.data.gov/dataset/archives-gov | No | None of the above | Yes | |||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0013 | Develop a Natural Language Based Chat Interface (like ChatGPT) to Interact With the Archival Documents | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of high barriers to entry for historical research by replacing complex database queries with a natural language interface that allows users of all skill levels to discover records and gain insights through simple conversation. | NARA plans to develop an AI-powered chat interface, similar to ChatGPT, that will allow users to easily explore its digital archives by asking questions in everyday language. This will make historical documents more accessible and engaging, leading to more efficient research and a deeper understanding of history. | This AI will create a chat interface that lets users easily get information from NARA's archives by having a conversation with it. | 01/09/2027 | b) Developed in-house | No | This AI will create a chat interface that lets users easily get information from NARA's archives by having a conversation with it. | No | None of the above | Yes | ||||||||||||||||
| National Archives And Records Administration | National Archives & Records Administration (NARA) | NARA - 0014 | Automated Data Discovery and Classification Pilot | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of inefficient manual data governance and risk assessment by testing automated data classification techniques to organize unstructured datasets, accelerate information discovery, and more accurately identify potential security or privacy vulnerabilities. | NARA plans to test how well AI can automatically organize and categorize its data. This pilot project will help NARA understand its data better, find information faster, improve efficiency, and manage risks more effectively. | This AI pilot will categorize data, improve search, and potentially train AI to identify new document types. | 01/09/2027 | b) Developed in-house | No | This AI pilot will categorize data, improve search, and potentially train AI to identify new document types. | No | None of the above | No | ||||||||||||||||
| National Credit Union Administration | NCUA 01 | Machine Learning Data Validation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Flags anomalies in Call Report data to improve data quality and reduce manual review | Improve Call Report data quality | Provide lists of potential data outliers for each credit union | 02/01/2023 | b) Developed in-house | No | Provide lists of potential data outliers for each credit union | NCUA Quarterly Call Report Data. | No | Other | Yes |||||||||||||||
| National Credit Union Administration | NCUA-02 | Supervisory Stress Testing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Forecasts loan performance (repayment, defaults) to support stress testing and risk analysis | Estimate loan default probability for Supervisory Stress Testing. | Analyzes credit union data to predict cash flows under economic stress scenarios | b) Developed in-house | No | Analyzes credit union data to predict cash flows under economic stress scenarios | Vendor sourced. Uses some NCUA credit union and Call Report Data. | No | Age | Yes ||||||||||||||||
| National Credit Union Administration | NCUA-03 | Risk Indicator Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifies credit union risks to proactively manage NCUSIF exposure | Support credit union supervision | Projected supervisory resource considerations | 01/01/2025 | b) Developed in-house | No | Projected supervisory resource considerations | Call report and examination data. | No | No | Yes |||||||||||||||
| Office Of Personnel Management | Microsoft Copilot | Production | Deployed | Low-impact | PTA, ATO | |||||||||||||||||||||||||||||
| Office Of Personnel Management | OpenAI ChatGPT | Pilot | Pilot | Low-impact | PTA, waiver | |||||||||||||||||||||||||||||
| Office Of Personnel Management | Anthropic Claude | Sandbox | Pre-deployment | Low-impact | Under review | |||||||||||||||||||||||||||||
| Office Of Personnel Management | USA Class | Production | Deployed | Medium-impact | AI Impact Assessment, PTA, ATO | |||||||||||||||||||||||||||||
| Office Of Personnel Management | OPM Rexi Chatbot | Production | Deployed | Medium-impact | PTA, waiver | |||||||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | ITIOD/OGC Privacy | PBGC - 01 | Synthetic Data Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Automation, Content Creation Addressing the need to test systems using non-production data | Efficient creation of test data that closely mimics production data, while protecting PII/CUI/etc. (scheduled and/or on-demand), to increase speed and accuracy of system testing | Automation, Content Creation Addressing the need to test systems using non-production data | Efficient creation of test data that closely mimics production data, while protecting PII/CUI/etc. (scheduled and/or on-demand), to increase speed and accuracy of system testing | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | ITIOD - Security Operations | PBGC - 02 | IT Security Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Deployed | Generative AI | Automation, Data Analysis Safeguarding PBGC data and content | Machine Learning capabilities to enhance IT Security monitoring to protect PBGC assets, people, and data. | Automation, Data Analysis Safeguarding PBGC data and content | Machine Learning capabilities to enhance IT Security monitoring to protect PBGC assets, people, and data. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | PBGC Enterprise | PBGC - 03 | Intelligent, auto-generated meeting summaries, notes, and tasks | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Automation, Content Generation Meeting and task documentation, post meeting analysis, decision tracking, etc. are manual administrative actions that may take time and resources away from focusing on higher benefit activities. | Automatic, auto-generated information and actions to provide benefit to the personnel in realizing time savings, instant accessibility to summary information, and many additional productivity gains. | Automation, Content Generation Meeting and task documentation, post meeting analysis, decision tracking, etc. are manual administrative actions that may take time and resources away from focusing on higher benefit activities. | Automatic, auto-generated information and actions to provide benefit to the personnel in realizing time savings, instant accessibility to summary information, and many additional productivity gains. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OMA / QMD / LDD | PBGC - 04 | AI-Assisted Training Development | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Content Generation Content (text, audio, image/graphic) generation is a manual activity. | AI-Assisted training/instructional design creation to boost productivity, reduce training development time, and improve effectiveness and efficiencies of training deployment. | Content Generation Content (text, audio, image/graphic) generation is a manual activity. | AI-Assisted training/instructional design creation to boost productivity, reduce training development time, and improve effectiveness and efficiencies of training deployment. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OMA/Procurement Dept. | PBGC - 05 | AI-Assisted Contracting Content/Document Creation | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Automation, Content Generation Federal contracting/procurement can be an arduous and time-consuming task. | AI-Assisted contract/acquisition document drafting to save time and resources, and provide better alignment to Agency policies, FAR, etc. | Automation, Content Generation Federal contracting/procurement can be an arduous and time-consuming task. | AI-Assisted contract/acquisition document drafting to save time and resources, and provide better alignment to Agency policies, FAR, etc. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OGC/GLOD | PBGC - 06 | AI-Assisted Recognition and Redaction for Legal/Litigation Matters | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Automation, Data Analysis Manual review and redaction of audio and images can be labor-intensive and potentially error-prone. | AI-Assisted review of audio, image, video, etc. to aid identification of personally identifiable information (PII) in efforts to provide ethical, privacy-protected evidence/information that is searchable and review-ready for legal personnel. | Automation, Data Analysis Manual review and redaction of audio and images can be labor-intensive and potentially error-prone. | AI-Assisted review of audio, image, video, etc. to aid identification of personally identifiable information (PII) in efforts to provide ethical, privacy-protected evidence/information that is searchable and review-ready for legal personnel. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | ITIOD/Enterprise | PBGC -07 | IT Service Management Virtual Agent and Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Automation, Content Generation Information Technology (IT) incident management is a continuous and at times repetitive, resource-consuming task. | Automation of routine tasks/IT incident response, providing expanded self-service and enhanced data-driven support, in efforts to focus human agents on more complex issues, provide more efficient IT support, enhance overall productivity and satisfaction, and provide a more streamlined IT incident management capability, resulting in potential cost savings/avoidance. | Automation, Content Generation Information Technology (IT) incident management is a continuous and at times repetitive, resource-consuming task. | Automation of routine tasks/IT incident response, providing expanded self-service and enhanced data-driven support, in efforts to focus human agents on more complex issues, provide more efficient IT support, enhance overall productivity and satisfaction, and provide a more streamlined IT incident management capability, resulting in potential cost savings/avoidance. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | Enterprise | PBGC - 08 | AI-Powered General Productivity | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Research, Automation, Content Generation Manual processes, research, data analysis, etc. can be inefficient and time consuming, resulting in resource suboptimization and productivity losses. | Automation of routine tasks, deeper more effective research, more in-depth and efficient data and information analysis, and support of content creation – to enhance overall productivity, freeing up personnel to address more meaningful, critical work. | Research, Automation, Content Generation Manual processes, research, data analysis, etc. can be inefficient and time consuming, resulting in resource suboptimization and productivity losses. | Automation of routine tasks, deeper more effective research, more in-depth and efficient data and information analysis, and support of content creation – to enhance overall productivity, freeing up personnel to address more meaningful, critical work. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OBA | PBGC - 09 | Pension Plan Summary Report Assistance | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Automation, Research, Data Analysis Complex, cumbersome, and often labor-intensive, long-duration evaluation of large documents and data stores | Summarization of Pension Plan information to serve as context for summary report drafting. Recognition, potential conversion of unstructured data into machine-readable data, evaluation/calculation, etc. of Pension Plan Data. | Automation, Research, Data Analysis Complex, cumbersome, and often labor-intensive, long-duration evaluation of large documents and data stores | Summarization of Pension Plan information to serve as context for summary report drafting. Recognition, potential conversion of unstructured data into machine-readable data, evaluation/calculation, etc. of Pension Plan Data. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OCFO | PBGC - 10 | Federal Accounting Statutes and Guidelines Alignment | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Research, Data Analysis Alignment to Federal Executive Orders, Memorandums, Guidelines etc. to PBGC policies can be intricate, multi-faceted, and may need many sources at once to fully address Federal Financial Management requirements. | Dynamic, efficient evaluation resulting in summary information for PBGC CFO-Federal Staff to quickly and effectively address updates to Federal guidelines. | Research, Data Analysis Alignment to Federal Executive Orders, Memorandums, Guidelines etc. to PBGC policies can be intricate, multi-faceted, and may need many sources at once to fully address Federal Financial Management requirements. | Dynamic, efficient evaluation resulting in summary information for PBGC CFO-Federal Staff to quickly and effectively address updates to Federal guidelines. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OIT/Privacy | PBGC - 11 | IT Security and Privacy Control Assessment | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Research, Data Analysis, Content Creation Referencing Federal Cybersecurity and Privacy guidelines, evaluation of controls, interpretation of supporting evidence, and generation of meaningful outputs is a constant/fluid initiative that requires substantial resource allocation and time. | Increase volume of control assessment, reduce manual labor, and automate findings generation in efforts to reduce control tailoring process time, reduce control implementation time, automate selection of application controls, and support rapid drafting of implementation statements. | Research, Data Analysis, Content Creation Referencing Federal Cybersecurity and Privacy guidelines, evaluation of controls, interpretation of supporting evidence, and generation of meaningful outputs is a constant/fluid initiative that requires substantial resource allocation and time. | Increase volume of control assessment, reduce manual labor, and automate findings generation in efforts to reduce control tailoring process time, reduce control implementation time, automate selection of application controls, and support rapid drafting of implementation statements. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OMA/HRD | PBGC -12 | Position Description Analysis and Consolidation | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Research, Data Analysis Repository of current Position Descriptions may have redundant and/or overly specific distinctions of various positions. | Potential reduction of total number of Position Descriptions to better organize and accomplish workforce management and allocation. | Research, Data Analysis Repository of current Position Descriptions may have redundant and/or overly specific distinctions of various positions. | Potential reduction of total number of Position Descriptions to better organize and accomplish workforce management and allocation. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | ONR | PBGC -13 | Analysis of Plan Benefits for Standard Terminations | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Research, Data Analysis Complex and often intricate plan benefits review and analysis. | Analysis of pension plan documentation for liability and standard termination preparation. | Research, Data Analysis Complex and often intricate plan benefits review and analysis. | Analysis of pension plan documentation for liability and standard termination preparation. | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OPEA / PRAD | PBGC - 14 | Legislative and Regulatory Analysis / Automation (Actuarial/Financial) | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Automation, Research, Data Analysis Collection, review, and evaluation of large unstructured data stores/sources | More automated review/evaluation of unstructured data, allowing for augmented technology and human analysis, and aiding in the generation of actuarial/financial projections and reports | Automation, Research, Data Analysis Collection, review, and evaluation of large unstructured data stores/sources | More automated review/evaluation of unstructured data, allowing for augmented technology and human analysis, and aiding in the generation of actuarial/financial projections and reports | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OPEA / PRAD | PBGC - 15 | Legislative and Regulatory Analysis / Automation (Actuarial/Financial) | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Coding Complex code management | Projection modeling code creation, enhancement, and management; to potentially increase speed and accuracy of stochastic modeling | Coding Complex code management | Projection modeling code creation, enhancement, and management; to potentially increase speed and accuracy of stochastic modeling | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | OPEA / COLAD | PBGC - 016 | Media Content Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Pre-deployment | Generative AI | Content Creation Media Content ideation and drafting can be labor intensive | Generation of draft news releases, talking points, speeches, testimony, articles, photo captions, and headlines; generation of images, videos, music, stock photos, and infographics for campaigns, stories, events, and articles; to better serve and inform internal and external audiences | Content Creation Media Content ideation and drafting can be labor intensive | Generation of draft news releases, talking points, speeches, testimony, articles, photo captions, and headlines; generation of images, videos, music, stock photos, and infographics for campaigns, stories, events, and articles; to better serve and inform internal and external audiences | ||||||||||||||||||||||||
| Pension Benefit Guaranty Corporation | ONR | PBGC - 017 | AI-Assisted Market Financial Health and Credit Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Deployed | Generative AI | Research, Content Generation Monitoring and documenting the financial health and credit ratings of thousands of distressed companies is a complex and information-dense analysis process. | AI-Assisted financial health and credit analysis; to include the extraction of key insights from relevant articles, earnings reports, contractual/covenant analysis, and even private credit content to improve discovery, provide more robust information/research, and improve the productivity of financial analysts. | Research, Content Generation Monitoring and documenting the financial health and credit ratings of thousands of distressed companies is a complex and information-dense analysis process. | AI-Assisted financial health and credit analysis; to include the extraction of key insights from relevant articles, earnings reports, contractual/covenant analysis, and even private credit content to improve discovery, provide more robust information/research, and improve the productivity of financial analysts. | ||||||||||||||||||||||||
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-1 | Comment Letter Review and Analysis | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | ||
| Securities And Exchange Commission | Division of Investment Management (IM) | SEC-2 | Forms Filed | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Investment Management (IM), Disclosure Review and Accounting Office (DRAO) | SEC-3 | Tailored Shareholder Report (TSR) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Corporate Finance (CF) | SEC-5 | 8-K Filing Solution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Computer Vision | Business users can now quickly go through the details to make informed decisions using the AWS image recognition AI service. | Searches and extracts information from certain 8-K filings (HTML files and images). | Extracts of filing data. | 25/03/2026 | b) Developed in-house | No | Extracts of filing data. | 8-K Filings | No | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | ||||
| Securities And Exchange Commission | Office of Human Resources (OHR) | SEC-6 | Training Conversation Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | [blank] | Natural Language Processing (NLP) | Strictly a training tool for practicing conversations on a variety of topic areas. | Improves communication and collaboration skills of the SEC workforce. | Feedback on collaboration and communication. | 24/12/2026 | a) Purchased from a vendor | SkillSoft | No | Feedback on collaboration and communication. | This is a training sandbox only. | No | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |||
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-7 | Mobile Phone Artificial Intelligence Features | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Corporate Finance (CF) | SEC-9 | CF Staff Review Comments | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | The solution will help classify comment letters. | Classifies staff comments and improves CF's review process. | SEC staff comment categories | [blank] | [blank] | [blank] | SEC staff comment categories | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-10 | EDGAR Filing Certification Review Solution | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-11 | Evaluating Cybersecurity Alerts | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | c) Not high-impact | Not high-impact | [blank] | Agentic AI | Explore different mechanisms leveraging AI to inform and expedite cybersecurity processes. | Enhances identification of anomalous cybersecurity alerts for review. Increases the effectiveness and efficiency of limited cybersecurity resources. | Cybersecurity analysis reports and initial assessments of activity. | 25/08/2026 | b) Developed in-house | [blank] | No | Cybersecurity analysis reports and initial assessments of activity. | Cybersecurity alerts | [blank] | No | [blank] | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank]
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-12 | EDGAR Filing Audit Report Solution | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | EDGAR Business Office (EBO) | SEC-14 | EDGAR Technical Support Public Chatbot | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-15 | EDGAR Filing Signature Review Solution | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-16 | EDGAR Filing Risk Disclosure Review Solution | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Strategic Hub for Innovation and Financial Technology (FinHub) | SEC-17 | Testing Developer Workflows | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-19 | Name Matching | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Name matching: identifies certain similarities in names across data sources. This allows staff to more easily compare information across data sources. | Identifies certain similarities in names across data sources. This allows staff to more easily compare information across data sources. | The system outputs a number which measures how closely names match. | 18/02/2026 | c) Developed with both contracting and in-house resources | Aretec Inc | Yes | The system outputs a number which measures how closely names match. | Registrant Data | [blank] | Yes | https://www.sec.gov/files/pia-national-exam-analytics-tool.pdf | k) None of the above | Yes | [blank] | [blank] | https://www.sec.gov/files/pia-national-exam-analytics-tool.pdf | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank]
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-20 | Parsing of Plain Language Descriptions to Machine-Readable Notations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Automates conversion of plain-language text into machine-readable standardized notations. This permits ease of analysis. | Automates conversion of plain-language text into machine-readable standardized notations. This permits ease of analysis. | The system outputs machine-readable notations. | 22/02/2026 | c) Developed with both contracting and in-house resources | Aretec Inc | Yes | The system outputs machine-readable notations. | Registrant Trade Data | [blank] | Yes | https://www.sec.gov/files/pia-national-exam-analytics-tool.pdf | k) None of the above | Yes | [blank] | [blank] | https://www.sec.gov/files/pia-national-exam-analytics-tool.pdf | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank]
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-21 | Identification of Potentially Manipulative Trading Activity in Registrant Data | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Identifies potentially manipulative trading activity for further review by an examination team. | Identifies potentially manipulative trading activity for further review by an examination team. | The system identifies activity of interest. | [blank] | [blank] | [blank] | The system identifies activity of interest. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-22 | Testing of Various AI Algorithms and Models | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-23 | Using Natural Language Processing Techniques to Search and Analyze Certain Filings | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Natural Language Processing (NLP) | Helps search and analyze a corpus of unstructured filings. Staff may consider the results of these efforts in their examination efforts. | Helps search and analyze a corpus of unstructured filings. Staff may consider the results of these efforts in their examination efforts. | This system allows for search and analysis of certain filings. | 20/04/2026 | c) Developed with both contracting and in-house resources | Aretec | Yes | This system allows for search and analysis of certain filings. | Registrant Filings | No | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |||
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-24 | Using Machine Learning/Artificial Intelligence Techniques to Predict Entities With Certain Risk Characteristics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Analyzes data to predict entities with certain risk characteristics. Staff may consider the results of these efforts in their examination efforts. | Analyzes data to predict entities with certain risk characteristics. Staff may consider the results of these efforts in their examination efforts. | This system provides predictive information. | 15/04/2026 | c) Developed with both contracting and in-house resources | IBM | Yes | This system provides predictive information. | Filings and Examination Data | No | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |||
| Securities And Exchange Commission | Division of Examinations (EXAMS), Division of Economic and Risk Analysis (DERA), Office of the Chief Data Officer (OCDO) | SEC-25 | Developed Parsing Logic for Filings | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Reinforcement Learning | Helps staff more efficiently search and analyze disclosure text contained in various filings. | Helps staff more efficiently search and analyze disclosure text contained in various filings. | These efforts will help to search and analyze certain filings. | 24/07/2026 | b) Developed in-house | Yes | These efforts will help to search and analyze certain filings. | 10K and ADV Filings | No | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | ||||
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-26 | Identification of Potentially Manipulative Activity in Certain Accounts | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifies accounts with certain activity of interest for further review by an examination team. | Identifies accounts with certain activity of interest for further review by an examination team. | The system identifies activity of interest. | 24/05/2026 | c) Developed with both contracting and in-house resources | Aretec Inc | Yes | The system identifies activity of interest. | Registrant Trade and MIDAS Market Data | Yes | https://www.sec.gov/files/pia-national-exam-analytics-tool.pdf | k) None of the above | Yes | [blank] | [blank] | https://www.sec.gov/files/pia-national-exam-analytics-tool.pdf | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | ||
| Securities And Exchange Commission | Division of Enforcement (ENF); Division of Economic and Risk Analysis (DERA) | SEC-27 | Using AI to Help with Coding-Related Tasks/Questions | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | [blank] | Retired | c) Not high-impact | Not high-impact | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF) | SEC-33 | Optical Character Recognition (OCR) for Records Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Computer Vision | Extracts structured data from unstructured digital image files of checks. | Assists ENF staff in extracting relevant information from digital image files. This will accelerate associated ENF investigations. | A text string of all text recognized from the associated image. | 22/05/2026 | c) Developed with both contracting and in-house resources | Aretec | Yes | A text string of all text recognized from the associated image. | Evaluated using image files from ENF investigations. | Yes | [blank] | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF) | SEC-34 | Single Event Insider Trading Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Assists ENF staff in identifying accounts with trading activity in advance of material equity price moves that warrant further investigation by the ENF staff. | Assists ENF staff in identifying accounts with trading activity in advance of material equity price moves that warrant further investigation by the ENF staff. | Leads worthy of consideration for additional investigation. | 18/04/2026 | c) Developed with both contracting and in-house resources | Aretec | Yes | Leads worthy of consideration for additional investigation. | Records of filed SEC enforcement matters and trading data | Yes | [blank] | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF) | SEC-35 | Utilize Westlaw Quick Check to cross-check SEC documents. | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Generative AI | Utilize the existing Westlaw product to cross-check SEC documents. | Review SEC documents for additional case information. | A report is created with a list of cases. | 25/01/2026 | a) Purchased from a vendor | Westlaw | [blank] | A report is created with a list of cases. | [blank] | [blank] | [blank] | [blank] | k) None of the above | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank]
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-36 | Augmenting cyber threat alerts on the ACES platform with ACES's specific information. | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | c) Not high-impact | Not high-impact | [blank] | Generative AI | The current cyber threat alerts on the ACES platform are difficult to understand, and we often see false positives relative to new use cases. | The current cyber threat alerts on the ACES platform are difficult to understand, and we often see false positives relative to new use cases. We would like to run the alerts through a RAG-enabled AI model to: add SEC-specific guidance and context to the emails based on a curated knowledge base; provide more guidance on how to respond to the alert; and make the notification email easier to read and understand. | The AI system will output the cyber threat alert along with information on the context of the threat in the SEC environment. It will summarize the information and provide suggested guidance in a more readable format. | b) Developed in-house | [blank] | Yes | The AI system will output the cyber threat alert along with information on the context of the threat in the SEC environment. It will summarize the information and provide suggested guidance in a more readable format. | The ACES team will develop a knowledge base using existing documentation on the workings of the ACES environment in the SEC in order to provide the RAG implementation. | [blank] | [blank] | [blank] | k) None of the above | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF) | SEC-37 | Large Language Model analysis for the Tips, Complaints and Referrals (TCR) system | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | The SEC receives high volumes of Tips, Complaints, and Referrals (TCRs) and has limited staff to triage these TCRs effectively. Some TCRs contain large narratives for allegations; AI tools can help summarize submitted information and intelligence for more efficient review by triage staff. | Large Language Model analysis will be leveraged to more efficiently review submissions to the SEC's Tips, Complaints and Referrals system. | Output will include various classification, analysis, trending and other analysis of Tips, Complaints and Referral system data. | [blank] | [blank] | [blank] | Output will include various classification, analysis, trending and other analysis of Tips, Complaints and Referral system data. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF) | SEC-38 | Using AI Large Language Model chat for general tasks/questions | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Generative AI | Using an AI Large Language Model chat for general tasks and answering questions | This AI service will provide an internal Large Language Model for general usage. | Chat responses are in direct response to the user's individual request. | 24/04/2026 | c) Developed with both contracting and in-house resources | Aretec, Elder Research | Yes | Chat responses are in direct response to the user's individual request. | No training | No | [blank] | k) None of the above | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-39 | Agentic Knowledge Base of Amazon Web Services Cloud Environment for the Securities and Exchange Commission (ACES) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | [blank] | Generative AI | Retrieving and summarizing support FAQ information | The AI system will look for relevant historical FAQs and Compliance Best Practices to present to Cloud Engineers using the ACES platform | The AI system will look for relevant historical FAQs and Compliance Best Practices to present to Cloud Engineers using the ACES platform | b) Developed in-house | Yes | The AI system will look for relevant historical FAQs and Compliance Best Practices to present to Cloud Engineers using the ACES platform | No training | No | [blank] | k) None of the above | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |||
| Securities And Exchange Commission | Office of Information Technology (OIT) | SEC-40 | Extracting data from FOIA request forms | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | [blank] | Generative AI | Capture and load data from a FOIA form into a SharePoint list for tracking through the eDiscovery Lifecycle. This alleviates time demands for manual entry from one system to another. | To capture and load data from a FOIA form into a SharePoint list for tracking through the eDiscovery Lifecycle. | Extracts data from a Microsoft Word document to a SharePoint list. | 25/03/2026 | c) Developed with both contracting and in-house resources | GDIT | No | Extracts data from a Microsoft Word document to a SharePoint list. | We used the FOIA forms submitted to eDiscovery to train the model. | No | [blank] | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Investment Management (IM) | SEC-41 | Knowledge base of federal securities law and chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | Using AI to carry out basic research on federal securities laws. | Provide Staff with a simple internal chatbot to help find relevant documents referencing IM-specific curated content. | Summary and links to relevant snippets from documents from the IM-curated library. | [blank] | [blank] | [blank] | Summary and links to relevant snippets from documents from the IM-curated library. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Investment Management (IM) | SEC-42 | Knowledge base of crypto comments and chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | Using AI to carry out basic research on opinions and submitted comments related to crypto assets. | Provide Staff with a simple internal chatbot to identify relevant documents referencing content curated by IM. To be used as a supplement to research. | Summary and links to relevant snippets from documents from the IM-curated library. | [blank] | [blank] | [blank] | Summary and links to relevant snippets from documents from the IM-curated library. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Operating Officer (OCOO) | SEC-43 | Create descriptions of documents for records management in SharePoint | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | Creating descriptions and other record-keeping metadata on documents. | Reduce the amount of time spent manually writing descriptions of documents. | The AI system creates metadata for documents stored in SharePoint. | [blank] | [blank] | [blank] | The AI system creates metadata for documents stored in SharePoint. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF), Division of Examinations (EXAMS), Office of the Chief Data Officer (OCDO), Office of the General Counsel (OGC) | SEC-44 | Westlaw Co-Counsel | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | [blank] | Generative AI | The AI output will facilitate legal research, document analysis, and document production. | More efficient legal research and analysis, document review, and document production. | Document summaries, document drafts, workflow analysis, and answers to user-selected questions. | 25/10/2026 | a) Purchased from a vendor | Thomson Reuters | [blank] | Document summaries, document drafts, workflow analysis, and answers to user-selected questions. | For purposes of the pilot, the datasets will be research datasets that are otherwise publicly available (court filings). | [blank] | No | [blank] | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank]
| Securities And Exchange Commission | Division of Enforcement (ENF), Division of Examinations (EXAMS), Office of the Chief Data Officer (OCDO), Office of the General Counsel (OGC) | SEC-45 | Lexis Protege | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | [blank] | Generative AI | The AI output will facilitate legal research, document analysis, and document production. | More efficient legal research and analysis, document review, and document production. | Document summaries, document drafts, workflow analysis, and answers to user-selected questions. | 25/10/2026 | a) Purchased from a vendor | RELX | No | Document summaries, document drafts, workflow analysis, and answers to user-selected questions. | For the pilot, the datasets will be publicly available research datasets (court filings). | [blank] | No | [blank] | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank]
| Securities And Exchange Commission | Division of Economic and Risk Analysis (DERA), Office of the Chief Data Officer (OCDO) | SEC-46 | Semantic Layer Natural Language Query | a) Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | This use case prepares data for use by AI and then enables users to query data with AI without having to write code. | Make agency enterprise data more accessible to business users for mission purposes. The analytical power of insights drawn from enterprise data will be enhanced by AI. | The system outputs data, SQL code, and narrative about how the data and code were generated. It also provides metadata including a data dictionary, table descriptions, entity-relationship diagrams, and more. | [blank] | [blank] | [blank] | The system outputs data, SQL code, and narrative about how the data and code were generated. It also provides metadata including a data dictionary, table descriptions, entity-relationship diagrams, and more. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF), Division of Economic and Risk Assessment (DERA) | SEC-47 | Using AI to answer questions about a corpus of uploaded files | c) Deployed ¤ The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Generative AI | This AI service will provide an internal Large Language Model for general usage for chatting and answering questions about user-uploaded files and internal knowledge bases. | Cost and time savings by reducing Staff's time in reviewing a large number of documents. | Chat responses are in direct response to a user's individual request. | 25/05/2026 | c) Developed with both contracting and in-house resources | Elder Research | Yes | Chat responses are in direct response to a user's individual request. | We used public 10-K and ADV filings for evaluation of models on Question & Answering tasks, and information extraction tasks. | No | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |||
| Securities And Exchange Commission | Division of Economic and Risk Assessment (DERA), Office of the Chief Data Officer (OCDO) | SEC-48 | AI Chart Generation | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | Our solution uses AI to generate chart code to automatically render interactive data visualizations for various chart types. | This will give Staff new data visualization capabilities as well as save Staff time by not having to research how to create various chart types and features. This could also enhance the quality of SEC published reports and data visualization content on sec.gov. | Output primarily consists of chart-related JSON objects formatted according to the Highcharts.js API. | [blank] | [blank] | [blank] | Output primarily consists of chart-related JSON objects formatted according to the Highcharts.js API. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Enforcement (ENF), Division of Economic and Risk Assessment (DERA) | SEC-49 | Creating AI generated images | c) Deployed ¤ The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Generative AI | Facilitate the use of models for AI-generated image creation. | Cost savings, efficient use of time for creating stock images for presentations, keeping up with state-of-the-art technologies. | Image responses are in direct response to a user's individual request. | 25/03/2026 | c) Developed with both contracting and in-house resources | Elder Research | Yes | Image responses are in direct response to a user's individual request. | No SEC data are used to train or fine-tune. This application uses a model from AWS as-is. | No | k) None of the above | Yes | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |||
| Securities And Exchange Commission | Division of Corporation Finance (CF) | SEC-50 | Disclosure Review Chatbot | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | The Disclosure Review Automation (DREAM) Chatbot helps division staff search within review guides and generate detailed responses to disclosure review questions. | The DREAM chatbot is designed to support SEC staff in the Corporation Finance division by delivering fast, accurate, and context-aware responses to disclosure-related questions. This chatbot delivers significant time savings for staff in the disclosure review program. | Real-time answers to staff questions about disclosure rules, filing procedures, and review protocols. Contextual guidance based on historical filings, internal policies, and SEC regulations. | [blank] | [blank] | [blank] | Real-time answers to staff questions about disclosure rules, filing procedures, and review protocols. Contextual guidance based on historical filings, internal policies, and SEC regulations. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-51 | Note-Taking Assistant | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | The application assists staff with review and analysis of the examiner's notes taken during the examination process. | This application will provide consistent and thorough review of examiner's notes to improve risk analysis during examinations. | This application provides a summary and analysis of an examiner's notes. | [blank] | [blank] | [blank] | This application provides a summary and analysis of an examiner's notes. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-52 | Public Website Search | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | This application summarizes a public website and allows staff to ask natural language questions about the contents of the website. | This application will streamline analysis of public websites which will improve efficiency and risk analysis during examinations. | This application provides summarizations and responses to questions based on the contents of a public website. | [blank] | [blank] | [blank] | This application provides summarizations and responses to questions based on the contents of a public website. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Examinations (EXAMS) | SEC-53 | Chatbot with SEC Documents | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | This application includes a knowledge base of internal SEC guidelines and procedures. It will help answer staff questions on those guidelines/procedures. | This application will streamline responding to questions regarding internal SEC guidance. It will improve efficiency and consistency of SEC work through near immediate response time to questions. | This application provides summarizations and responses to questions based on the contents of a corpus of documents and internal SEC guidelines. | [blank] | [blank] | [blank] | This application provides summarizations and responses to questions based on the contents of a corpus of documents and internal SEC guidelines. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of Information Technology (OIT), Office of the Chief Data Officer (OCDO), Office of the Advocate for Small Business Capital Formation (OASB) | SEC-54 | Enhancing Analysis of SEC Web Engagement Trends Using AI | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Manual analysis of Google Analytics data is time-consuming and limits the ability to identify trends across multiple SEC.gov pages. | Faster, more accurate insights into how the public engages with SEC content, enabling more effective outreach and communication strategies. | Dashboards or reports summarizing traffic sources, engagement trends, and content performance. | [blank] | [blank] | [blank] | Dashboards or reports summarizing traffic sources, engagement trends, and content performance. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO), Office of Information Technology (OIT), Office of the Advocate for Small Business Capital Formation (OASB) | SEC-55 | Identifying Citations of SEC Publications to Measure Market Impact | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Lack of visibility into how SEC publications are cited in external research and industry materials. | Improved understanding of the reach and impact of SEC publications, supporting transparency and accountability. | Citation reports and trend summaries showing where and how SEC materials are referenced. | [blank] | [blank] | [blank] | Citation reports and trend summaries showing where and how SEC materials are referenced. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of Information Technology (OIT), Office of the Chief Data Officer (OCDO), Office of the Advocate for Small Business Capital Formation (OASB) | SEC-56 | Analyzing Registered Offerings Data to Understand Capital Formation Trends | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Manual review and analysis of data filed with the SEC about registered offerings is time-consuming. | Faster, more accurate insights into how registrants are raising capital from investors through registered offerings. | Dashboards or reports summarizing data filed with the SEC about registered offerings. | [blank] | [blank] | [blank] | Dashboards or reports summarizing data filed with the SEC about registered offerings. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO), Office of Information Technology (OIT), Office of the Advocate for Small Business Capital Formation (OASB) | SEC-57 | Analyzing EDGAR Search Trends to Understand Public Interest in Filings | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Limited insight into how the public interacts with EDGAR and what types of filings are most viewed. | Better understanding of public interest in SEC filings, supporting transparency and user-centered improvements. | Reports and dashboards showing search frequency by company, filing type, and issuer category. | [blank] | [blank] | [blank] | Reports and dashboards showing search frequency by company, filing type, and issuer category. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO), Office of Information Technology (OIT), Office of the Advocate for Small Business Capital Formation (OASB) | SEC-58 | Analyzing Form ADV Data to Identify Trends Among Investment Advisers | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | Manual analysis of Form ADV data is inefficient and limits trend detection. | Improved oversight and understanding of adviser trends, affiliations, and private fund reporting. | Trend reports and visualizations based on structured ADV data. | [blank] | [blank] | [blank] | Trend reports and visualizations based on structured ADV data. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Division of Trading and Markets (TM), Office of the Chief Data Officer (OCDO), Division of Economic and Risk Assessment (DERA), Division of Investment Management (IM), Division of Corporation Finance (CF) | SEC-59 | Rule and Comment Organization and Summarization | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | Downloading, navigating, searching, and understanding comments and rule filings can be slow and burdensome using existing tools. | The tool aims to ease these burdens of downloading, navigating, searching, and understanding comments and rule filings. In particular, AI is used to summarize individual comments, rule proposals, and overall subject matters to promote the understanding of vast quantities of unstructured data, i.e., text. | The tool generates summaries and summaries of summaries using LLMs. | [blank] | [blank] | [blank] | The tool generates summaries and summaries of summaries using LLMs. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO) | SEC-60 | AI-enabled information collection and curation | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Agentic AI | Collecting information from a variety of sources, including documents, databases, and staff interviews, and curating it into a knowledge base. | Collecting and organizing information in support of various use cases. | AI-ready knowledge bases of content in a variety of formats, tagged and organized. | [blank] | [blank] | [blank] | AI-ready knowledge bases of content in a variety of formats, tagged and organized. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO) | SEC-61 | AI-enabled information dissemination | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Agentic AI | Process, organize, and propagate new information to staff and AI agents when it arrives. | Users and applications throughout the agency will benefit from better information dissemination processes. | Information will be sent along with descriptions on why the information is important and what to do with the information. | [blank] | [blank] | [blank] | Information will be sent along with descriptions on why the information is important and what to do with the information. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO) | SEC-62 | Legal Research (Lexis+) | c) Deployed ¤ The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Generative AI | More efficient legal research. | Improved search results and extracted information to assist in legal research activities. | AI-enabled extractions of text and AI-enabled type-aheads in search bars | 20/01/2026 | a) Purchased from a vendor | RELX (Lexis) | No | AI-enabled extractions of text and AI-enabled type-aheads in search bars | No data from the SEC; data is managed by RELX. | [blank] | No | [blank] | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO) | SEC-63 | Legal Research (Westlaw Precision) | c) Deployed ¤ The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Generative AI | More efficient legal research. | Improved search results and extracted information to assist in legal research activities. | AI-enabled extractions of text and AI-enabled type-aheads in search bars | 20/01/2026 | a) Purchased from a vendor | West Publishing | No | AI-enabled extractions of text and AI-enabled type-aheads in search bars | No data from the SEC (all West Publishing material) | [blank] | No | [blank] | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] |
| Securities And Exchange Commission | Division of Examinations (EXAMS), Division of Enforcement (ENF) | SEC-64 | Natural language search on images through Flashpoint | c) Deployed ¤ The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Computer Vision | Using natural language to search through images posted to social media. | Help the agency find and identify relevant images when reviewing social media posts. | The Flashpoint AI image search returns images from its database of social media posts relevant to a user's natural language query. | 25/04/2026 | a) Purchased from a vendor | Flashpoint | No | The Flashpoint AI image search returns images from its database of social media posts relevant to a user's natural language query. | No SEC data assets are used. Flashpoint uses images from social media it has collected. | [blank] | No | [blank] | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] |
| Securities And Exchange Commission | Division of Enforcement (ENF), Division of Examinations (EXAMS) | SEC-65 | Streamlining initial research | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Generative AI | Initial research steps in examinations and enforcement matters require marshalling many different data sets, for which AI can improve efficiency. | Expected benefits may include improving efficiency by automating previously manual steps, improving risk identification, and optimizing use of resources. | This tool will search, retrieve, organize, and summarize information across multiple systems. | [blank] | [blank] | [blank] | This tool will search, retrieve, organize, and summarize information across multiple systems. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Securities And Exchange Commission | Office of the Chief Data Officer (OCDO) | SEC-66 | Use natural language to query crypto market data and create on-demand visualizations with prepopulated thresholds | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI simplifies the process of creating data visualizations for crypto market data. | Simplifying crypto market data pulls and visualizations will help streamline the agency's review of crypto-related products. | Crypto market data visualizations which are used as part of a comprehensive review of crypto-related products. | [blank] | [blank] | [blank] | Crypto market data visualizations which are used as part of a comprehensive review of crypto-related products. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | ||
| Securities And Exchange Commission | Division of Enforcement (ENF), Division of Examinations (EXAMS) | SEC-67 | Tracing digital asset transactions and identifying entities controlling wallet addresses | c) Deployed ¤ The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | The tool helps to identify clusters of related wallet addresses and to predict the sender's change in transaction data. | Staff has greater clarity of the wallet addresses controlled by certain entities and the movement of their digital assets. | Digital asset holdings and transfers are used to build an understanding of the activities of certain entities during examinations and investigations. | 25/12/2026 | a) Purchased from a vendor | Chainalysis | No | Digital asset holdings and transfers are used to build an understanding of the activities of certain entities during examinations and investigations. | SEC provides wallet addresses to the model. | [blank] | No | k) None of the above | No | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | ||
| Securities And Exchange Commission | Office of the Secretary (OS) | SEC-68 | Improve efficiency of comment letter ingestion | c) Deployed ¤ The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | [blank] | Classical/Predictive Machine Learning | (1) Use AI/ML to improve the timeliness and accuracy of the ingestion of comment letters, such as by automating the identification of comment file numbers and enhancing the identification of spam, form letters, PII, profanity, and copyrighted material. (2) Increase the quality and amount of structured data (including metadata) related to individual comment letters to directly support downstream uses, including use by GenAI tools. | Improve the timeliness and accuracy of the ingestion of comment letters, and increase the quality and amount of structured data (including metadata) related to individual comment letters to directly support downstream uses, including use by GenAI tools. | Additional metadata and flags on incoming comment letters. | 25/04/2026 | c) Developed with both contracting and in-house resources | [blank] | Yes | Additional metadata and flags on incoming comment letters. | Comment Letters | [blank] | Yes | https://www.sec.gov/files/pia-comment-letter-log.pdf | k) None of the above | Yes | [blank] | [blank] | https://www.sec.gov/files/pia-comment-letter-log.pdf | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] |
| Securities And Exchange Commission | EDGAR Business Office (EBO) | SEC-69 | Comparing EDGAR system access request forms | a) Pre-deployment ¤ The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | [blank] | Computer Vision | The AI helps extract information from scanned documents and compares it with information provided on a digital form | This application will streamline the EDGAR system's registration review process | Verification that the scanned documents information matches the submitted digital forms. | [blank] | [blank] | [blank] | Verification that the scanned documents information matches the submitted digital forms. | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | [blank] | |
| Tennessee Valley Authority | Enterprise Planning | Extreme Load Analysis | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Supply Chain | Materials Intelligent Catalog Assistant | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Right of Way | Vegetation Management | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Enterprise Analytics & Innovation | CAP Automation | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Enterprise Analytics & Innovation | CR intelligent search | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Supply Chain | Materials tracking | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Enterprise Analytics & Innovation | Heliviewer | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Enterprise Analytics & Innovation | ICI camera | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Generation Projects & Fleet Strategy | GET | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Generation Projects & Fleet Strategy | GMET | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Commercial Energy Solutions | Optiwatt EV Load Model | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Commercial Energy Solutions | Cold-Weather Heat Pump Model | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Commercial Energy Solutions | EE/DR Smart Thermostat Impact Model | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Commercial Energy Solutions | EnergyRightPSPImpactsForecasting | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Commercial Energy Solutions | CESASolarModel | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Commercial Energy Solutions | CESAFCAAllocator | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Commercial Energy Solutions | EnergyRightDERPortfolioEvaluation | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Monitoring Diagnostics Management | Prism Models for MDM | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Anomaly detection | Environment | Neither | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Enterprise Planning | Load Forecasting | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | GIS | ArcGIS | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Network | Advanced Network Anomaly Detection | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Workforce Development | Natural Reader | Administrative Functions | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Riverfleet Management Services | River Forecast | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Network | Perimeter Defense | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | IT Operations | AI-Enhanced Search | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Transmission | Network Anomaly Detection | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Office of General Counsel | Westlaw | Other | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Copilot | Administrative Functions | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Github Copilot | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Transmission System Support | ESRI GIS AI | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Generation | SEEQ AI | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Increased efficiency and productivity | Information Technology | Neither | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Cisco ML | Cybersecurity | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Datum | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Safety & Shared Services | Transmission analytics | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | AWS Q | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | ChatSPP | Administrative Functions | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Generation Projects & Fleet Services | GET Enhancement | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Generation Projects & Fleet Services | GMET Enhancement | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Generation Projects & Fleet Services | GMET Multivariate analysis | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Nuclear | CAP Enhancement | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | COBIT and ATAD Prediction | Information Technology | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Increased efficiency and productivity | Technology | Neither | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Generation | Generation Project Funding Estimates | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Increased efficiency and productivity, reduced errors | Environment | Neither | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Land Management Records | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Environmental Documentation | Energy & the Environment | Unknown ||||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Increased efficiency and productivity | Environment | Neither | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Bill of Material | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | TVA Today AI | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Non-NPG CAP | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | River Management | Azure FEWS CUA | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Nuclear | Azure Nuclear Permitting Solution | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | AWS Transcribe | Human Resources | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Innovation & Research | Thinklabs | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Innovation & Research | Specification Intelligence | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Transmission | Trade Chat | Energy & the Environment | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Amazon Connect | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Information Technology | Enscape | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | Economic Development | Arkio | Information Technology | Unknown | ||||||||||||||||||||||||||||||
| Tennessee Valley Authority | External Communications | Talkwalker | Administrative Functions | Unknown | ||||||||||||||||||||||||||||||
| United States Trade And Development Agency | Microsoft Copilot | Unknown | ||||||||||||||||||||||||||||||||
| United States Trade And Development Agency | Crowdstrike Falcon Threat Detection and Response, Qualys Cloud Detection and Response | Unknown | ||||||||||||||||||||||||||||||||
| United States Trade And Development Agency | Apple iPhone, Lookout for mobile security | Unknown | ||||||||||||||||||||||||||||||||