Federal AI Use Case Inventory
The table below presents the Federal AI Use Case Inventory.
| Agency | Bureau/Component | Use Case ID | Use Case Name | Stage of Development (Raw) | Use Case Topic Area | Stage of Development | Is the AI use case high-impact? (Raw) | High-impact? | Justification | AI Classification | What problem is the AI intended to solve? | What are the expected benefits and positive outcomes from the AI for an agency's mission and/or the general public? | Describe the AI system's outputs. | Date when AI use case became operational or the pilot's start date | Was the system involved in this use case purchased from a vendor or developed under contract(s) or in-house? | Vendor(s) Name | Does this AI use case have an associated Authorization to Operate (ATO)? | System(s) Name | Describe any data used to train, fine-tune, and/or evaluate performance of the model(s) used in this use case. | If the data is required to be publicly disclosed as an open government data asset, provide a link to the entry on the Federal Data Catalog. | Does this AI use case involve personally identifiable information (PII) that is maintained by the agency? | If publicly available, provide the link to the AI use case's associated Privacy Impact Assessment (PIA). | Which, if any, demographic variables does the AI use case explicitly use as model features? | Does this project include custom-developed code? | If the code is open source, provide the link for the publicly available source code. | Has pre-deployment testing been conducted for this AI use case? | Has an AI impact assessment been completed for this AI use case? | What are the potential impacts of using the AI for this particular use case and how were they identified? | Has an independent review of the AI use case been conducted? | Is there a process to conduct ongoing monitoring to identify any adverse impacts to the performance and security of the AI functionality, as well as to privacy, civil rights, and civil liberties? | Has the agency established sufficient and periodic training for operators of the AI to interpret and act on its output and manage associated risks? | Does this AI use case have an appropriate fail-safe that minimizes the risk of significant harm? | Is there an established appeal process in the event that an impacted individual would like to appeal or contest the AI system's outcome? | What steps has the agency taken to consult and incorporate feedback from end users of this AI use case and the public? |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Commodity Futures Trading Commission | DOD | CFTC-001 | Anomaly Detection for Data Quality | Operation and Maintenance | | Deployed | | | Anomaly detection model designed to identify potentially erroneous data loads in TCR data. Uses an isolation forest model with aggregated data and automatically runs daily. ||||||||||||||||||||||||||
| Commodity Futures Trading Commission | DCR | CFTC-003 | Stress Testing Scenarios with Deep Learning | Acquisition and/or Development | | Pre-deployment | | | Pilot project to explore neural-network-based machine learning methods for creating stress testing scenarios and estimating PnL (profit and loss) on FO (Futures and Options) portfolios based on current/recent market states and conditions. ||||||||||||||||||||||||||
| Commodity Futures Trading Commission | MPD | CFTC-004 | MPD Entity Risk Modeling | Initiated | | Pre-deployment | | | Entity-level risk modeling project. Uses statistical and probabilistic models to predict firms experiencing changes in capital levels. Very early-stage R&D effort. ||||||||||||||||||||||||||
| Department Of Agriculture | Administrative and Financial Management | USDA-001 | Repair Spend | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The intended purpose of this model is to review financial documents and then classify each expense as money spent on "facility repairs" or "not facility repairs". The expected benefits include reduction of manual hours identifying the types of transactions. | The output of the model is a recommendation of which financial transactions should be identified as "repair" expenses. | 10/01/2019 | Developed with both contracting and in-house resources | The output of the model is a recommendation of which financial transactions should be identified as "repair" expenses. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of National Programs | USDA-002 | ARS Project Mapping | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The intended purpose of this model is to process research plans from various research program portfolios in the Agricultural Research Service (ARS) to find patterns and opportunities between projects. The expected benefits include decreasing the time that humans would spend to manually read, pull out key terms, and group the projects by topic. The model may also find patterns that a human might miss. | The model outputs groups of similar projects and project terms. The output includes metrics (silhouette scores, term rank, importance scores) that show how well the projects and terms in a group match. | 01/01/2020 | Developed with contracting resources | The model outputs groups of similar projects and project terms. The output includes metrics (silhouette scores, term rank, importance scores) that show how well the projects and terms in a group match. | None; | ||||||||||||||||||||
| Department Of Agriculture | National Agricultural Library | USDA-003 | NAL Automated Indexing | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | This system automatically assigns word tags to agricultural research articles from a controlled list of terms provided by the National Agricultural Library Thesaurus (NALT). The tags can be used to look up and retrieve articles. Using these tags benefits users by making it easier to find the content they are looking for. | The model outputs terms to use as search tags that are specific to the article that the model analyzed. | 06/01/2011 | Developed with both contracting and in-house resources | The model outputs terms to use as search tags that are specific to the article that the model analyzed. | None; | ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-004 | Predictive Modeling of Invasive Pest Species | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of the model is to check how likely it is for imported agricultural products from other countries to have pests. Benefits include more reliable discovery and quarantine of invasive pests, preventing pest invasion and making trade safer. | The model outputs are a prediction of whether a product carries an invasive species and what invasive species category the pest belongs to. | 07/01/2015 | Developed in-house | The model outputs are a prediction of whether a product carries an invasive species and what invasive species category the pest belongs to. | None; | ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-005 | Detection of Pre-symptomatic HLB Infected Citrus | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | The purpose of the model is to detect citrus trees infected with Huanglongbing (HLB) disease using images collected by a camera sensor on a small drone. This system would decrease the time and cost associated with manually searching for HLB infected trees. | The model outputs GPS coordinates of potential Huanglongbing (HLB) infected areas. | The model outputs GPS coordinates of potential Huanglongbing (HLB) infected areas. |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-006 | High Throughput Phenotyping in Citrus Orchards | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | The main purpose of this system is to analyze drone images to locate, count, and categorize citrus trees in an orchard to monitor orchard health. This use case saves thousands of man-hours searching for signs of plant damage and disease in orchards. | The model output flags images containing plant damage or disease. | The model output flags images containing plant damage or disease. | |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-007 | Detection of Aquatic Weeds | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | The purpose of this system is to locate and identify aquatic weed species using images from drones. Expected benefits include decreasing time that would have been spent manually reviewing the images. | The model outputs the aquatic weed species contained in the image. | The model outputs the aquatic weed species contained in the image. | |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-008 | Automated Detection & Mapping of Host Plants from Ground Level Imagery | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Science & Space | Retired | This system generates maps of specific tree species from ground-level (streetview) images. Expected benefits are decreased time and cost associated with manual collection of the data. | The model outputs GPS coordinates of flagged locations. | The model outputs GPS coordinates of flagged locations. | |||||||||||||||||||||||||
| Department Of Agriculture | Strategic Planning and Business Services Division | USDA-009 | Democratizing Data | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | This system scans collections of published documents to find how publicly-funded data and evidence are used to serve science and society. This helps the National Agricultural Statistics Service and the Economic Research Service understand who is using their data and why. This improves customer service, helps evaluate programs, and answers important questions for planning and learning. | The model outputs text containing the identified dataset reference information. | 03/08/2021 | Developed with contracting resources | The model outputs text containing the identified dataset reference information. | None; | ||||||||||||||||||||
| Department Of Agriculture | Geospatial Enterprise Operations | USDA-011 | Land Change Analysis Tool (LCAT) | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Mission-Enabling (Internal Agency Support) | Pilot | No | Not high-impact | The Land Change Analysis Tool (LCAT) creates high resolution maps to help make land use decisions. For example, it has been used to monitor eastern redcedar for about 40 years in South Dakota and to support wildlife hazard assessments at airports with various organizations. This tool reduced the labor hours needed by the Farm Service Agency (FSA) to review land data accuracy in Georgia by 100 times. | The model outputs land cover maps. | 10/01/2018 | Developed in-house | The model outputs land cover maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of the Chief Data Officer | USDA-012 | OCIO/CDO Council Comment Analysis Tool | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | This prototype helps reviewers identify the main topics and themes of comments, and then group similar comments together. This makes the comment review process more efficient by providing new insights and speeding up comment processing. Benefits include reducing repeated development efforts across the government and saving costs. | The model outputs groups of comments categorized by topic and similarity. | 12/01/2020 | Developed with contracting resources | The model outputs groups of comments categorized by topic and similarity. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of Retailer Operations & Compliance | USDA-013 | Retailer Receipt Analysis | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | No | Not high-impact | This system uses optical character recognition (OCR) to convert physical inventory documentation into digital text. This makes the review of inventory documents more efficient and consistent. | The model outputs digital text of inventory documentation and distinguishes food items and categories. | 10/01/2021 | Developed with contracting resources | The model outputs digital text of inventory documentation and distinguishes food items and categories. | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development | USDA-014 | Ecosystem Management Decision Support System (EMDS) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | This system provides decision support for environmental analysis and planning by using AI-powered tools in ArcGIS and QGIS. Use of this system empowers stakeholders to make more informed and effective decisions about natural resource management. | Outputs from this system include the identification of landscapes in need of management/maintenance, along with suggested management actions based on considerations such as cost, efficacy, and policy. | 01/01/1994 | Developed with contracting resources | Outputs from this system include the identification of landscapes in need of management/maintenance, along with suggested management actions based on considerations such as cost, efficacy, and policy. | None; | ||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-016 | Cross-Laminated Timber (CLT) Knowledge Database | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | This system enables researchers, practitioners, and the public to find specialized information about timber products. Benefits include faster information sharing and less time spent on manual searches. | System outputs are webpage links from the timber knowledge database. | 12/01/2017 | Developed with contracting resources | System outputs are webpage links from the timber knowledge database. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-017 | Raster Tools | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | This system will make machine learning techniques available for geospatial applications. Benefits include standardization of methods, improved work quality, and increased user productivity. | The system API (Application Programming Interface) provides various AI outputs, usually in the form of raster images and data tables. | 08/01/2021 | Developed in-house | The system API (Application Programming Interface) provides various AI outputs, usually in the form of raster images and data tables. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station, Missoula Fire Sciences Lab | USDA-018 | TreeMap and FuelMap (all versions) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | TreeMap provides a detailed model of the forests in the US. It is used for measuring carbon, planning fuel treatments, starting landscape vegetation models, assessing fire effects, and more. Users include the US Forest Service, private companies, and state governments. | TreeMap produces a detailed map of a plot of forest and a database table listing individual tree records or fuel characteristics for each plot. | 01/01/2010 | Developed in-house | TreeMap produces a detailed map of a plot of forest and a database table listing individual tree records or fuel characteristics for each plot. | None; | ||||||||||||||||||||
| Department Of Agriculture | Geospatial Office | USDA-019 | Landscape Change Monitoring System (LCMS) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | This project monitors large areas for changes in land cover and land use over time. The benefits include creating a consistent method for tracking changes in the landscape. | The model outputs predictions of vegetation gain, vegetation loss, land cover, and land uses. | Developed in-house | The model outputs predictions of vegetation gain, vegetation loss, land cover, and land uses. | None; | |||||||||||||||||||||
| Department Of Agriculture | Geospatial Office | USDA-021 | Forest Health Detection Monitoring | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Energy & the Environment | Pilot | No | Not high-impact | This project monitors forest health by detecting tree damage through changes in light patterns collected by satellites. This detection method helps the Forest Health Protection program monitor areas that can't be checked on the ground or with aerial surveys. | The model outputs the stage of forest health based on the image, along with a map (polygons) of the area for monitoring. | Developed with both contracting and in-house resources | The model outputs the stage of forest health based on the image, along with a map (polygons) of the area for monitoring. | None; | |||||||||||||||||||||
| Department Of Agriculture | Research and Development Division | USDA-022 | Cropland Data Layer | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | This project produces supplemental estimates of crop acreage and releases geospatial data products to the user community. | The system outputs are an acreage estimate and an agriculture-specific land cover product. | 01/01/2008 | Developed in-house | The system outputs are an acreage estimate and an agriculture-specific land cover product. | None; ||||||||||||||||||||
| Department Of Agriculture | Frames Maintenance | USDA-023 | List Frame Deadwood Identification | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | This model helps identify farms that may be out of business on the National Agricultural Statistics Service list. Parts of the model were used to create clear rules to identify these farms. The resulting list is more accurate and allows for smaller sample sizes, reducing the burden on respondents. | The output of the model was a probability score that a farm is out of business. | 02/04/2014 | Developed in-house | The output of the model was a probability score that a farm is out of business. | Age; | ||||||||||||||||||||
| Department Of Agriculture | Planning, Accountability and Reporting Staff and Institute of Bioenergy, Climate and Environment | USDA-024 | Climate Change Classification NLP | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The Climate Change Classification Natural Language Processing (NLP) model identifies likely climate-related projects within National Institute of Food and Agriculture's (NIFA) large and diverse funding portfolio. Expected benefits include reduced labor hours for reporting and increased repeatability and accuracy of reporting. | Model output is a list of climate change projects classified as "climate change related" or "not climate change related" for National Institute of Food and Agriculture (NIFA) internal project review/adjudication and reporting. | 07/01/2021 | Developed with both contracting and in-house resources | Model output is a list of climate change projects classified as "climate change related" or "not climate change related" for National Institute of Food and Agriculture (NIFA) internal project review/adjudication and reporting. | None; | ||||||||||||||||||||
| Department Of Agriculture | Facility Protection Division (FPD) | USDA-025 | Video Surveillance System | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this system is to conduct facial recognition video surveillance to provide enhanced security. Benefits include reduced labor hours for technicians and augmented surveillance capability. | The system outputs a positive match to the security control center, indicating identification of the selected individual. An alarm notification is sent to alert security personnel. | Developed with both contracting and in-house resources | The system outputs a positive match to the security control center, indicating identification of the selected individual. An alarm notification is sent to alert security personnel. | Sex/Gender; Race/Ethnicity; | |||||||||||||||||||||
| Department Of Agriculture | Office of the Chief Information Officer | USDA-026 | Acquisition Approval Request Compliance Tool | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | This project was developed to help identify likely Information Technology (IT) purchases that do not have an associated Acquisition Approval Request. The benefits are reducing unauthorized IT purchases and increasing compliance with IT procurement procedures and approvals. | The output is a score indicating how likely it is that the purchase is an Information Technology (IT) purchase. | Developed with both contracting and in-house resources | The output is a score indicating how likely it is that the purchase is an Information Technology (IT) purchase. | None; |||||||||||||||||||||
| Department Of Agriculture | National Water and Climate Center | USDA-027 | Operational Water Supply Forecasting for Western US Rivers | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | Yes | High-impact | The National Water and Climate Center has a multi-model machine-learning metasystem (M4) for generating water supply forecasts. This model uses AI and other data-science technologies to reduce forecast errors, helping stakeholders make better decisions about water supply availability. | The model outputs water supply forecasts. | 12/01/2019 | Developed in-house | The model outputs water supply forecasts. | None; | ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-028 | Standardization of Cut Flower Business Names for Message Set Data | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Mission-Enabling (Internal Agency Support) | Retired | Natural Language Processing (NLP) is used to match names of producers to varietal information for cut flowers, which will help convert from manual to automated inspection systems. The main benefit of automation is that it can manage thousands of entities, which would be impossible to handle manually. | The model outputs before and after lists of producer names and cut flower varieties. | The model outputs before and after lists of producer names and cut flower varieties. | |||||||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-029 | Intelligent Ticket Routing | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | Help desk tickets are often sent to the wrong group and must be manually re-routed, which takes time and resources and may delay issue resolution. The Intelligent Ticket Routing system sends each ticket to the correct group, increasing customer satisfaction by reducing the number of times a customer is transferred or placed on hold and decreasing the average handle time (AHT). In this use case, it reduces the time taken to route a ticket to the appropriate group, shortening the time required to resolve an issue. | The system outputs a prediction of the appropriate group for ticket management. | 01/01/2022 | Developed with both contracting and in-house resources | The system outputs a prediction of the appropriate group for ticket management. | None; ||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-030 | Predictive Maintenance Impacts | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | A natural language processing (NLP) model classifies whether infrastructure maintenance changes will or will not cause an incident at the Digital Infrastructure Services Center (DISC). Using this system, the business can improve the review process or address specific needs within groups. This will lead to process improvements, increased productivity, higher performance and job satisfaction, higher client satisfaction, and better achievement of key performance indicators (KPIs). | The model outputs a score between 0 and 1; values closer to 1 indicate a higher likelihood that the proposed change will cause an incident. | 03/01/2020 | Developed with both contracting and in-house resources | The model outputs a score between 0 and 1; values closer to 1 indicate a higher likelihood that the proposed change will cause an incident. | None; ||||||||||||||||||||
| Department Of Agriculture | Center for Civil Rights Operations; Data Records and Management Division | USDA-031 | Artificial Intelligence SPAM Mitigation Project | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Government Services (includes Benefits and Service Delivery) | Retired | An AI/ML model automatically classifies and removes spam and marketing emails from civil rights complaints email channels. Benefits include reducing the time spent manually managing email channels, decreasing the memory burden on email systems, and lowering the risk from malicious emails. | The model outputs a classification of received emails, flagging spam, marketing, and phishing emails. | The model outputs a classification of received emails, flagging spam, marketing, and phishing emails. | |||||||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-032 | Approximate String Matching (aka fuzzy matching) to Standardize Data | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | A model is used to correct typos in Plant Protection and Quarantine (PPQ) program data using a list of standardized producer and commodity names. This results in clean, standardized data through an automated workflow. Benefits include reducing labor hours compared to manual data cleaning, making near-real-time reporting possible, and producing accurate data that enables program managers to conduct efficient policy enforcement and program monitoring. | The model outputs corrected text data. | 02/01/2023 | Developed in-house | The model outputs corrected text data. | None; ||||||||||||||||||||
| Department Of Agriculture | Plant Protection and Quarantine | USDA-033 | Automated PDF Document Processing and Information Extraction | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | This use case takes program and workforce related information stored in thousands of PDFs and converts the information into data tables that can be used for analytics and dashboards. This makes information that is difficult to find available in real-time to support decision making and saves large amounts of time compared to previous methods used. | The model outputs structured database tables. | Developed in-house | The model outputs structured database tables. | None; | |||||||||||||||||||||
| Department Of Agriculture | Research and Development Division | USDA-035 | Census Propensity Scores via ML | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | This model predicts how likely individuals or operations are to complete the Census of Agriculture. The predictions can help data collectors decide where they need to focus their efforts in order to get more complete census responses. | The model outputs a probability score (values from 0 to 1, inclusive). | 10/01/2022 | Developed in-house | The model outputs a probability score (values from 0 to 1, inclusive). | Zipcode; ||||||||||||||||||||
| Department Of Agriculture | Soil Science and Resource Assessment | USDA-036 | Ecological Site Descriptions (Machine Learning) | Stage 5 - Retired (Use case has been retired or is in the process of being retired) | Mission-Enabling (Internal Agency Support) | Retired | This AI/ML work conducts analysis of over 20 million records of soils data and 20,000 text documents of ecological information in order to provide complete soil based ecological information for the country. Benefits include reduction in labor hours manually analyzing documents, and enabling stakeholders to examine records in ways previously not thought of to make more informed decisions. | The AI model outputs ecological soil classifications and mappings. | The AI model outputs ecological soil classifications and mappings. | |||||||||||||||||||||||||
| Department Of Agriculture | Resource Inventory and Assessment Division | USDA-037 | Conservation Effects Assessment Project | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of the use case is to predict the conservation effects of cropland practices in real time, with no technical skill required. Such models would allow field conservation planners to have real-time conservation effects on sediment and nutrients. | The model outputs predictions of sediment and nutrient change values based on conservation methods. | 11/01/2021 | Developed in-house | The model outputs predictions of sediment and nutrient change values based on conservation methods. | None; | ||||||||||||||||||||
| Department Of Agriculture | Resource Inventory and Assessment Division | USDA-038 | Digital Imagery (no-change) for NRI Program | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | AI algorithms are used to look at landscape images and detect whether the landscape has changed from year to year. Currently, about 72,000 aerial images are interpreted by dozens of technicians each year to collect data for the National Resources Inventory (NRI) program. This use case would decrease the number of labor hours required of technicians to manually interpret images. | The model outputs a “no-change” classification if the landscape in the images remains stable from year to year. | 10/01/2022 | The model outputs a “no-change” classification if the landscape in the images remains stable from year to year. | ||||||||||||||||||||||||
| Department Of Agriculture | Regional Operations & Support; Mountain Plains Regional Office | USDA-039 | Nutrition Education & Local Access Dashboard | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Government Services (includes Benefits and Service Delivery) | Deployed | No | Not high-impact | The goal of this dashboard is to provide county-level information on nutrition education and local food access, alongside other metrics related to hunger and nutritional health. This interactive dashboard can provide specific details based on the properties of farm-to-school intensity and size, program activity intensity, ethnicity and race, fresh food access, school size, and program participation. These properties allow users to find similar states based on any of these characteristics, opening up opportunities for partnerships with states they may not have considered. Benefits include increasing stakeholder awareness and empowering more informed decision-making and collaboration. | The model outputs groups of similar counties/states based on the different combinations of properties available for states. | 11/09/2022 | Developed with both contracting and in-house resources | The model outputs groups of similar counties/states based on the different combinations of properties available for states. | Race/Ethnicity; | ||||||||||||||||||||
| Department Of Agriculture | Methods Division | USDA-040 | Survey Text Remarks Value Scoring | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this use case is to analyze a large amount of text in survey responses and score all comments with a priority value. The highly scored blocks of text are then prioritized for review by a human and are responded to more quickly than if they were retained in a queue. | The model outputs a value score for each snippet of text; highly scored snippets are placed at the front of the queue before lower-scored blocks to capture text of value more quickly. | Developed in-house | The model outputs a value score for each snippet of text; highly scored snippets are placed at the front of the queue before lower-scored blocks to capture text of value more quickly. | None; | |||||||||||||||||||||
| Department Of Agriculture | Methodology Division; Statistics Division; Regional Field Offices | USDA-041 | Survey Outlier Detection Model | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | The purpose of this use case is to identify abnormal values to edit in surveys. This reduces manual labor and improves data quality. | The model outputs a recommendation of which values in a dataset should be changed. | 05/01/2022 | Developed in-house | The model outputs a recommendation of which values in a dataset should be changed. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of the Chief Technology Officer | USDA-042 | Multilingual Translation of Recalls and Public Health Alerts | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | No | Not high-impact | The purpose of this system is to expand the multilingual outreach of food safety information like recalls and public health alerts. Benefits include cost savings on vendor translation services, faster messaging circulation, and an increased number of languages available to the general public. | The model outputs multilingual translations created from the original English text. | Developed with both contracting and in-house resources | The model outputs multilingual translations created from the original English text. | None; | |||||||||||||||||||||
| Department Of Agriculture | Office of Public Health Science | USDA-043 | Genomic Analyses of Pathogen Subtypes | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this use case is to use machine learning (ML) methods to group foodborne germs based on patterns in their genes, then connect this information with available health data to evaluate foodborne germ risk to public health. Expected benefits include improving our understanding of important foodborne germ genes, assessing key genes and new trends, and identifying and ranking germs that are important for public health. | The model outputs predictions of high risk foodborne germ subtypes, key genetic markers by importance, and emerging trends. | 08/01/2022 | Developed in-house | The model outputs predictions of high risk foodborne germ subtypes, key genetic markers by importance, and emerging trends. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of Public Health Science | USDA-044 | Foodborne Illness Source Attribution | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The Interagency Food Safety Analytics Collaboration (IFSAC) - a partnership between the Centers for Disease Control and Prevention (CDC), the U.S. Food and Drug Administration (FDA), and the Food Safety and Inspection Service (FSIS) - has used computer-based methods to predict the likely sources of foodborne illnesses in humans caused by various germs (e.g., Salmonella, Campylobacter). Expected benefits include improving our understanding of where these germs come from and how they spread, which can help in creating measures and policies to prevent or reduce illnesses and the overall impact of these diseases. | The model outputs predictions of likely sources of foodborne human illness cases, along with a confidence score of how probable it is that the illness came from the predicted source. | 08/02/2021 | Developed in-house | The model outputs predictions of likely sources of foodborne human illness cases, along with a confidence score of how probable it is that the illness came from the predicted source. | None; | ||||||||||||||||||||
| Department Of Agriculture | MRPIT Data & Analytics Directorate | USDA-045 | Public Comments Analysis | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of the model is to automate the analysis of comments from regulations.gov to help personnel in their review and response tasks. Benefits include a reduction in the number of labor hours needed for review and response. | The model outputs text analysis and categorization of the public comments. | 11/01/2023 | Developed in-house | The model outputs text analysis and categorization of the public comments. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rangeland Management Research Unit (Las Cruces) | USDA-046 | Rangeland Analysis Platform | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The Rangeland Analysis Platform (RAP) allows users to track changes in plant growth and coverage over time. By monitoring the condition of agricultural ecosystems and the impact of conservation efforts, it can guide conservation practices for wildlife habitats, carbon assessments, and tax assessments. | The system outputs estimated fractional plant cover and net primary productivity estimates. | 04/01/2022 | Developed in-house | The system outputs estimated fractional plant cover and net primary productivity estimates. | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development Division; Methodology Division; Regional Field Offices | USDA-047 | Predictive Cropland Data Layer | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | The purpose of this system is to predict crop rotations. Benefits include improving data quality of area-based surveys. | The system outputs predictions of the types of crops in specific locations within the Conterminous United States (CONUS). | 01/01/2021 | Developed in-house | The system outputs predictions of the types of crops in specific locations within the Conterminous United States (CONUS). | None; | ||||||||||||||||||||
| Department Of Agriculture | Oklahoma Natural Resources Conservation Service (NRCS) Watershed Program | USDA-048 | Dam Inspection Report Document Processing | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this AI is to pull out and organize data from thousands of dam inspection documents so that we can use Microsoft Power BI to understand the condition of thousands of USDA Watershed program dams. This allows us to identify the biggest issues and trends across our collection of over 2,100 dams in Oklahoma while reducing labor hours required to complete the task manually. | The model outputs text and checkbox responses, including dam metadata, inspection issue tracking (yes and no checkboxes), and further remarks on the issue or what has been/needs to be done on the dam. | 05/01/2023 | Developed in-house | The model outputs text and checkbox responses, including dam metadata, inspection issue tracking (yes and no checkboxes), and further remarks on the issue or what has been/needs to be done on the dam. | None; | ||||||||||||||||||||
| Department Of Agriculture | Information Services Division | USDA-049 | Portfolio Approval and Management (PAM) Bot | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | The purpose of the model is to improve the Economic Research Service (ERS) research approval process. The system reduces the time it takes to fill out information and seek approval, improves information accuracy, and brings visibility to the approval status across various division functions. | The system provides three outputs: the approval status, summary recommendations, and generated citations. | 05/01/2024 | The system provides three outputs: the approval status, summary recommendations, and generated citations. | ||||||||||||||||||||||||
| Department Of Agriculture | Nebraska Natural Resources Conservation Service (NRCS) | USDA-050 | GIS Invasive Tree Extraction for Field Level Users | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The purpose of the model is to estimate the spread of invasive tree infestation, specifically Eastern redcedar. This helps to avoid poor or inaccurate estimates caused by time constraints and heavy workloads when manually collecting the data. | The model outputs polygons representing the extent of trees present in the landscape. | Developed in-house | The model outputs polygons representing the extent of trees present in the landscape. | None; | |||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-051 | DISTRIB-II: Habitat Suitability of Eastern United States Tree | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The purpose and expected benefits of the Climate Change Atlas are to give forest resource managers, forest landowners, and the general public information on the current and potential future of habitats for various tree species in the eastern United States. This information can contribute to forest management decisions when considering how climate change may affect the trees currently present and how likely it is that other tree species not currently in an area might find new habitats under different climate change scenarios. | The system outputs predictions of how well a tree species can live in a certain habitat based on climate change scenarios. Maps, graphs, and reports are generated from the modeled geographic information systems (GIS) data. | 04/10/1998 | Developed in-house | The system outputs predictions of how well a tree species can live in a certain habitat based on climate change scenarios. Maps, graphs, and reports are generated from the modeled geographic information systems (GIS) data. | None; | ||||||||||||||||||||
| Department Of Agriculture | Assistant Chief Data Officers Team | USDA-052 | FSA FLP Chatbot | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of this use case is to streamline searching loan handbooks in order to provide better customer service. The expected benefit is to help employees provide better service. A second benefit being explored is providing Veteran-specific answers about services. | The expected output is text answers to prompt questions. | Developed in-house | The expected output is text answers to prompt questions. | Veteran; | |||||||||||||||||||||
| Department Of Agriculture | Insurance Services | USDA-053 | ROE Document Recognition - RoeDR | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this model is to analyze documents from producers and Authorized Insurance Providers (AIPs), pick the appropriate page from the documents, read the signature date and producer signature name, convert the date and name to text, and load it into an application. This feature saves us from having to input the data manually. We can then use the data for reporting purposes. | The model outputs the producer signature and signature date within the document as text. | Developed in-house | The model outputs the producer signature and signature date within the document as text. | None; | |||||||||||||||||||||
| Department Of Agriculture | Biotechnology Regulatory Services | USDA-054 | IRIS | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The purpose of this system is to make literature searches more effective for Biotechnology Regulatory Services. This increases work efficiency with our regulatory tasks. | The model outputs a recommended literature list for scientists. | 01/09/2023 | Developed with contracting resources | The model outputs a recommended literature list for scientists. | None; | ||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-055 | Ticket Resolution Categorization (Incident/Change) | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this model is to classify the resolution type and tier of all support desk tickets after they have been closed. This model allows the support team to spend more time identifying process inefficiencies and plan solutions rather than categorizing tickets. This will lead to process improvement, automation of repetitive tasks, increased productivity, and higher performance. | The model outputs the classification category of support ticket resolutions. | 06/01/2023 | Developed with both contracting and in-house resources | The model outputs the classification category of support ticket resolutions. | None; | ||||||||||||||||||||
| Department Of Agriculture | Digital Infrastructure Services Center | USDA-056 | Ticket Templatization | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | This model is a non-production exploratory model, meaning it does not make predictions but is used to explore trends within data to gain insights that can help in making data-driven decisions. It is designed to explore and analyze service and change requests submitted through the 105 general form or without templates. This model helps to identify subcategories within the larger dataset that could be candidates for standardization and automation, potentially leading to improved operational efficiency, cost savings, and customer satisfaction. | The model outputs trends within data to assist in decision making. | 01/01/2024 | Developed with contracting resources | The model outputs trends within data to assist in decision making. | None; | ||||||||||||||||||||
| Department Of Agriculture | North Dakota State Office | USDA-057 | File Rename Automation | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Mission-Enabling (Internal Agency Support) | Pilot | No | Not high-impact | The purpose of this tool is to rename thousands of documents converted from physical to digital records that were given a generic file name. This tool can grab text from page 1 of each document and apply a correct file rename instead of employees having to spend hours manually renaming documents. | The model outputs renamed files. | 11/06/2023 | Developed in-house | The model outputs renamed files. | None; | ||||||||||||||||||||
| Department Of Agriculture | Office of National Programs | USDA-058 | Rapid Drafting of ARS Research Summaries | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | The purpose of this tool is to summarize ongoing research from internal Agricultural Research Service (ARS) documents to allow program staff to quickly create accurate and timely summary documents, such as briefing papers, talking points for leadership, and speeches. This will give staff more time for other duties, and leadership will be able to confidently answer questions, justify budget requests, and ensure that our research is innovative and relevant. | The tool outputs talking points and short briefing papers. | The tool outputs talking points and short briefing papers. | |||||||||||||||||||||||||
| Department Of Agriculture | Soil and Plants Science Division; Soil Services and Information; Conservation Information Delivery | USDA-059 | DS Hub Geo-metadata generation | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | The purpose of this AI use case is to generate metadata for Natural Resources Conservation Service (NRCS) datasets, ensuring consistency, accessibility, and compliance through generative AI. Expected benefits include increased data accessibility, reduced manual workload, minimized errors, better dataset understanding, and fast data retrieval for stakeholders. | The model outputs accurate, consistent, and compliant metadata appropriate for the existing geospatial data that it analyzed. | The model outputs accurate, consistent, and compliant metadata appropriate for the existing geospatial data that it analyzed. | |||||||||||||||||||||||||
| Department Of Agriculture | Soil and Plants Science Division; Soil Services and Information; Conservation Information Delivery | USDA-060 | Dynamic Soils Hub | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The Dynamic Soils Hub (DS Hub) under the Natural Resources Conservation Service (NRCS) is a tool designed to help both government workers and the public understand and analyze soil information. The DS Hub links different soil and conservation databases, making it easier to evaluate the environmental benefits of conservation programs by accessing previously separate data and models. This enhances the USDA’s ability to study and report on how soil properties change with conservation efforts over time. | The system outputs the class of soil based on the supplied soil information. | 11/11/2020 | Developed with contracting resources | The system outputs the class of soil based on the supplied soil information. | None; | ||||||||||||||||||||
| Department Of Agriculture | Deputy Administrator for Compliance; Business Analytics Division | USDA-061 | Cover Crop Mapping | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Law & Justice | Pre-deployment | No | Not high-impact | This project aims to annually map fall and spring cover crop practices on farms in the U.S. Midwest. These maps are made using satellite images and models of plant growth. This data helps the agency independently find out the extent of cover crop practices. | The output is a state-level map of detected cover crops by year, classified by planting date (fall, spring). | 09/02/2022 | Developed with both contracting and in-house resources | The output is a state-level map of detected cover crops by year, classified by planting date (fall, spring). | None; | ||||||||||||||||||||
| Department Of Agriculture | Deputy Administrator for Compliance; Business Analytics Division | USDA-062 | Planting Date Detection | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Law & Justice | Pre-deployment | No | Not high-impact | This project aims to find out the planting dates for corn, soybean, and winter wheat on farms in the U.S. Midwest. Maps containing planting dates are made using satellite images and models of plant growth. This data helps the agency independently verify reported planting dates on farm fields, supporting efforts to ensure the integrity of their programs. | The output is an annual map of planting dates for corn, soybean, and winter wheat for crop years 2016-2023. | 09/07/2022 | Developed with both contracting and in-house resources | The output is an annual map of planting dates for corn, soybean, and winter wheat for crop years 2016-2023. | None; | ||||||||||||||||||||
| Department Of Agriculture | Deputy Administrator for Compliance; Business Analytics Division | USDA-063 | Acreage and Crop Type Validation | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Law & Justice | Pre-deployment | No | Not high-impact | This project uses satellite images and plant growth models to generate a farm field size estimate and the crop type on farms in the U.S. Midwest. This data helps the agency independently find out the accuracy of reported field sizes and crop types, supporting efforts to ensure the integrity of their programs. | The output is a validation of reported acreage and validation of reported crop type for corn, soybean, and winter wheat on farm fields. | 09/01/2022 | Developed with both contracting and in-house resources | The output is a validation of reported acreage and validation of reported crop type for corn, soybean, and winter wheat on farm fields. | None; | ||||||||||||||||||||
| Department Of Agriculture | Veterinary Services | USDA-064 | U.S. Poultry Operations and Populations Dataset | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Emergency Management | Deployed | No | Not high-impact | The purpose of this case is to develop a dataset that addresses the problem of not having complete information about where poultry farms are located and how many birds they have. Filling this gap provides detailed data on poultry farm locations and populations, which is essential for planning animal health emergencies and predicting the spread of diseases. | Output is a national-level dataset of domestic poultry operations and estimated populations. | Developed in-house | Output is a national-level dataset of domestic poultry operations and estimated populations. | None; | |||||||||||||||||||||
| Department Of Agriculture | Veterinary Services | USDA-065 | Equine Operations and Populations Dataset for the U.S. | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Emergency Management | Pre-deployment | No | Not high-impact | The purpose of this case is to develop a dataset that addresses the problem of not having complete information about where horse farms are located and how many horses they have. Filling this gap provides detailed data on horse farm locations and populations, which is essential for planning emergencies and predicting the spread of diseases. | Output is a national-level dataset of domestic horse operations and estimated populations. | 01/02/2023 | Developed with both contracting and in-house resources | Output is a national-level dataset of domestic horse operations and estimated populations. | None; | ||||||||||||||||||||
| Department Of Agriculture | National Agricultural Statistics Service | USDA-066 | NASS - Naggle 2.0 Automated Editing Tool | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Agricultural Statistics | Pre-deployment | The purpose of this model is to determine if an answer on a survey is valid or invalid. If an answer is classified as invalid, a regression model will then suggest a corrected value. This approach will help reduce errors and improve the accuracy of survey forms, saving time and reducing the number of labor hours spent on editing. | The classification model outputs an Excel sheet with the survey, person, variable, and whether the variable is valid or invalid. The regression model outputs an Excel sheet containing the invalid records, which includes the survey, person, variable, original value, and new predicted value. | 06/03/2024 | The classification model outputs an Excel sheet with the survey, person, variable, and whether the variable is valid or invalid. The regression model outputs an Excel sheet containing the invalid records, which includes the survey, person, variable, original value, and new predicted value. | ||||||||||||||||||||||||
| Department Of Agriculture | Research and Development Division | USDA-067 | County-level remotely-sensed corn and soybean yield estimation | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Agricultural Statistics | Deployed | No | Not high-impact | The purpose of this tool is to estimate yearly corn and soybean yields for each county using satellite images. More details can be found in the paper titled “An assessment of pre- and within-season remotely sensed variables for forecasting corn and soybean yields in the United States” (https://doi.org/10.1016/j.rse.2013.10.027). Benefits of providing county-level crop yield statistics allow stakeholders to make more informed planning and decisions. | The model outputs county-level crop yield estimates for corn and soybeans in the amount of bushels per acre. | 01/01/2007 | Developed in-house | The model outputs county-level crop yield estimates for corn and soybeans in the amount of bushels per acre. | None; | ||||||||||||||||||||
| Department Of Agriculture | Strategic Planning and Business Services Division | USDA-068 | NASSportal Intranet Bot | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | This model will assist National Agricultural Statistics Service (NASS) staff in finding answers to questions on how to administer programs. This will decrease labor hours and increase efficiency of NASS staff in program administration. | The chatbot will provide text outputs. | 07/01/2024 | The chatbot will provide text outputs. | ||||||||||||||||||||||||
| Department Of Agriculture | Research and Development; Forest Products Laboratory | USDA-069 | XyloTron/XyloPhone Wood Identification System | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Law & Justice | Pilot | No | Not high-impact | The purpose of these tools is to identify different types of wood based on their cross-section. These tools will help industries follow laws and support law enforcement in meeting national (e.g., the Lacey Act) and international (e.g., CITES) regulations. | The tools will output a prediction of the type of wood. | 01/01/2016 | Developed in-house | The tools will output a prediction of the type of wood. | None; | ||||||||||||||||||||
| Department Of Agriculture | Procurement and Property Services | USDA-070 | Incident Invoice Document Understanding | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of this tool is to analyze incident invoices and return the values that need to be entered into a database. This new approach leads to faster invoice processing and reduces data entry mistakes for more accurate data. | The tool outputs an Excel document containing required values identified from incident invoices. | Developed with contracting resources | The tool outputs an Excel document containing required values identified from incident invoices. | None; | |||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-071 | Forest disease detection and screening | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this project is to improve tree disease diagnosis and screening, thereby facilitating ongoing efforts within and outside the Forest Service to manage diseases of forest trees. | The model will output a prediction indicating whether a tree is diseased or not, and if a tree is resistant or susceptible to a disease. | 08/03/2020 | Developed in-house | The model will output a prediction indicating whether a tree is diseased or not, and if a tree is resistant or susceptible to a disease. | None; | ||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-072 | Use of LLMs for data extraction | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Mission-Enabling (Internal Agency Support) | Pre-deployment | The purpose of this model is to quickly gather information from scientific papers to track plant diseases. The practical benefit is that using this method would save time compared to manually collecting the information, which is slow and error-prone when done over long periods. | The model outputs a table of requested data variables (e.g., country, pathogen name, host name, etc.). | 10/01/2023 | The model outputs a table of requested data variables (e.g., country, pathogen name, host name, etc.). | ||||||||||||||||||||||||
| Department Of Agriculture | Business Operations/Chief Data Office | USDA-073 | IPWG Application Survey Analysis | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of this tool is to analyze over 5,000 responses from an internal employee survey on IT applications, then give a summary of the employee feedback regarding each IT application. This decreases the time required to go through each response manually, helping the team make informed investment decisions more quickly. | The model outputs a text summarization for each IT application in the survey data, and potentially includes text summaries of responses and sentiment analysis. The project will also produce a dashboard that allows users to see similar attributes at the agency, office, application, and individual response level. | 10/03/2024 | Developed in-house | The model outputs a text summarization for each IT application in the survey data, and potentially includes text summaries of responses and sentiment analysis. The project will also produce a dashboard that allows users to see similar attributes at the agency, office, application, and individual response level. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-074 | Fire Resilient Landscapes | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The goal of this tool is to quantify the cost of forest treatments. Benefits include providing the ability to accurately map treatment costs for users to make more informed decisions. | The tool outputs predictions in the form of raster surfaces/maps. | 08/01/2021 | Developed in-house | The tool outputs predictions in the form of raster surfaces/maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-075 | PC Rasterize | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of this tool is to be able to process point cloud data more efficiently. This will reduce costs associated with processing point cloud data. | The tool outputs point clouds and raster surfaces/maps. | 08/01/2024 | Developed in-house | The tool outputs point clouds and raster surfaces/maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-076 | Spread and Balance Sample Design | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of this tool is to produce samples that are well spread and balanced. This sample design will reduce the quantity of samples needed and further reduce costs associated with collecting field data. | The tool outputs data frames and geospatial-data-frames. | 05/01/2024 | Developed in-house | The tool outputs data frames and geospatial-data-frames. | None; | ||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-077 | Regression, Classification, Clustering with Hilbert Curves | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of this tool is to perform better regression, classification, and clustering. This will create new and better ways to produce various estimates, reducing cost and error. | The tools will output data frames and raster surfaces/maps. | 06/01/2024 | Developed in-house | The tools will output data frames and raster surfaces/maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development | USDA-079 | The Big Data, Mapping, and Analytics Platform (BIGMAP) Project | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | The purpose of this project is to use geospatial predictions from Forest Inventory and Analysis samples to make more accurate estimates of different forest characteristics. Greater precision in estimates leads to more informed decisions about the forest resources in the US. | The model outputs predictions in the form of raster maps. | 01/01/2019 | Developed with both contracting and in-house resources | The model outputs predictions in the form of raster maps. | None; | ||||||||||||||||||||
| Department Of Agriculture | Research and Development | USDA-080 | BirdNET to detect bird vocalizations for research and species monitoring | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Science & Space | Deployed | No | Not high-impact | BirdNET quickly scans thousands of hours of forest audio recordings to detect bird calls from species that are important for forest monitoring, like spotted owls, black-backed woodpeckers, and willow flycatchers. This decreases the time and cost associated with manually listening to recordings to identify bird calls. | The model outputs text files of bird calls, which include the bird species and time that the call was recorded. | 06/01/2021 | Developed in-house | The model outputs text files of bird calls, which include the bird species and time that the call was recorded. | None; | ||||||||||||||||||||
| Department Of Agriculture | Forest Inventory and Analysis; Southern Research Station | USDA-081 | Hurricane impact descriptions | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Mission-Enabling (Internal Agency Support) | Pre-deployment | No | Not high-impact | The purpose of the AI model is to convert a table of data about a tropical cyclone's path and estimated impact on forests into a clear and understandable story. This is part of a rapid assessment given to stakeholders after a cyclone hits, so it needs to be produced quickly. We are creating a tool to automate this process, and the AI helps produce better-quality reports. | The model outputs a few paragraphs of easy-to-read text that explains the effects of a cyclone. | 07/01/2024 | Developed in-house | The model outputs a few paragraphs of easy-to-read text that explains the effects of a cyclone. | None; | ||||||||||||||||||||
| Department Of Agriculture | Southern Research Station-4353 | USDA-082 | Predictive flood modeling | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Transportation | Pre-deployment | The purpose of this tool is to predict water flow during floods and assess the vulnerability of drains under roads. This will help the U.S. Department of Transportation (USDOT) and the USDA Forest Service make informed decisions about drain restoration and protection against flooding. | The model outputs water flow predictions during flood events and the vulnerability level of drains under roads. | 10/01/2024 | The model outputs water flow predictions during flood events and the vulnerability level of drains under roads. | ||||||||||||||||||||||||
| Department Of Agriculture | Rocky Mountain Research Station | USDA-083 | FuelCast | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Emergency Management | Deployed | No | Not high-impact | This project predicts future fuel conditions and gives early warnings to help plan fuel management. The benefits include better preparation of the US firefighting teams for potential increases in large wildfires. This system also reduces the workload for fire behavior analysts because it provides fuel estimates, so they don't have to spend as much time figuring out fire behavior patterns through trial and error. | The model outputs predictions of the future quantity of wood and plants that could be present and contribute to wildfires. | Developed with both contracting and in-house resources | The model outputs predictions of the future quantity of wood and plants that could be present and contribute to wildfires. | None; | |||||||||||||||||||||
| Department Of Agriculture | Northern Region (R1) | USDA-084 | R1 Forest Vegetation Modeling | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Energy & the Environment | Pre-deployment | The purpose of this tool is to use satellite images and methods such as LiDAR (light detection and ranging) with machine learning to model forest vegetation and make estimates. The use of machine learning improves models and estimates with decreased time and cost. | The model outputs predictions of forests and vegetation in the form of raster and vector geospatial maps. | 01/01/2024 | The model outputs predictions of forests and vegetation in the form of raster and vector geospatial maps. | ||||||||||||||||||||||||
| Department Of Agriculture | Geospatial Office | USDA-085 | ESRI Support Chat Bot | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Science & Space | Pre-deployment | The purpose of this chatbot is to help handle requests for support with geospatial software between our team and the software vendor. The benefits include saving time when dealing with Environmental Systems Research Institute (ESRI) support issues and reducing the number of specific ESRI support tickets that need to be sent to the Forest Service Geospatial Helpdesk or to ESRI through contract support services. | The chatbot outputs support ticket entries, code snippets for queries, and text and links for support ideas and answers. | The chatbot outputs support ticket entries, code snippets for queries, and text and links for support ideas and answers. | |||||||||||||||||||||||||
| Department Of Agriculture | Southern Research Station | USDA-086 | Wildlife deterrent system | Stage 3 - Implementation (Use case is currently undergoing functionality and security testing) | Science & Space | Pilot | No | Not high-impact | The purpose of the AI device is to keep coyotes out of a fenced area by blocking their entry through a gap in the fence, while still allowing other wildlife to pass through. The benefits include making ecological research possible that couldn't be done otherwise, and saving time by reducing the need to watch camera footage and manually control the fence. | The model performs video object detection of coyotes and arms an electrical barrier to prevent passage of the coyote. | Developed with contracting resources | The model performs video object detection of coyotes and arms an electrical barrier to prevent passage of the coyote. | None; | |||||||||||||||||||||
| Department Of Agriculture | Pacific Southwest Research Station | USDA-087 | The Lost Meadows Model | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The purpose of this model is to find out where meadows used to be and how often they appeared in order to understand their original state and their potential for restoration. The discovery of these areas increases the potential for meadow restoration, which can benefit biodiversity, wildfire management, carbon storage, and water storage. | The model outputs predictions of areas with meadow-like environmental conditions. The predicted areas include a mixture of existing but undocumented meadows, non-meadow lands that may have once been meadows, and meadow-like areas that may never have been a meadow. | 10/10/2022 | Developed in-house | The model outputs predictions of areas with meadow-like environmental conditions. The predicted areas include a mixture of existing but undocumented meadows, non-meadow lands that may have once been meadows, and meadow-like areas that may never have been a meadow. | None; | ||||||||||||||||||||
| Department Of Agriculture | Pacific Southwest Research Station | USDA-088 | Markov random fields for mixed forests | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this tool is to improve the accuracy of estimates in machine learning models. The benefits include helping stakeholders make more informed and effective decisions for managing mixed forests. | The model outputs predicted counts of tree species in a location, and the degree of competition between different tree species in the same location. | 10/01/2022 | Developed in-house | The model outputs predicted counts of tree species in a location, and the degree of competition between different tree species in the same location. | None; | ||||||||||||||||||||
| Department Of Agriculture | Pacific Northwest Research Station | USDA-089 | AI for regional forest mapping and monitoring | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Energy & the Environment | Deployed | No | Not high-impact | The purpose of this model is to use existing satellite images and forest survey data from the USDA Forest Service to create detailed maps of forest structures. This information will help land managers be more effective and efficient with their planning. | The model outputs GeoTiffs (raster maps of forest attributes, such as tree density and tree species data). | 01/01/2000 | Developed with contracting resources | The model outputs GeoTiffs (raster maps of forest attributes, such as tree density and tree species data). | None; | ||||||||||||||||||||
| Department Of Agriculture | WO Research and Development | USDA-090 | IOL Focus Group and Survey Sensemaking | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Education & Workforce | Pre-deployment | The purpose of this model is to efficiently process large quantities of focus group transcripts and survey results. Benefits include decreased labor hours manually processing transcripts and surveys. | The model outputs text summaries of focus group comments and surveys. | 06/11/2024 | The model outputs text summaries of focus group comments and surveys. | ||||||||||||||||||||||||
| Department Of Agriculture | Geographic Information System (GIS) Stakeholder Community - all deputy areas | USDA-091 | Esri ArcGIS Pro Deep Learning Modules | Stage 4 - Operation and Maintenance (Use case is integrated into agency operations, and is being monitored for performance) | Mission-Enabling (Internal Agency Support) | Deployed | No | Not high-impact | The purpose of this tool is to enhance scientific modeling and analysis, which will standardize Geographic Information System (GIS) workflows for modeling and analytics. | The tools will output image classifications. | 04/01/2024 | Developed in-house | The tools will output image classifications. | None; | ||||||||||||||||||||
| Department Of Agriculture | National Forest System; Ecosystem Management and Coordination | USDA-092 | EMC Comment Parsing and Analysis | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Government Services (includes Benefits and Service Delivery) | Pre-deployment | This project aims to extract, categorize, and respond to public comments based on past responses. The benefits include creating a standardized process for handling comments, making public comment data more accessible and ready for AI use, reducing the time and cost of processing comments, minimizing human errors due to high workloads and tight deadlines, improving responsiveness to public concerns, increasing public trust, enhancing accountability through clear reporting, and supporting team training by building a database of common themes and response strategies. | The model will output text analyses of categories pulled from public comments and recommend responses based on historic responses. | 03/01/2024 | The model will output text analyses of categories pulled from public comments and recommend responses based on historic responses. | ||||||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-093 | QUIC-Fire processing and analysis | Stage 1 - Initiation (The use case's intended purpose and high-level requirements are documented) | Science & Space | Pre-deployment | AI is being used to analyze data from fire-atmosphere models to understand fire behavior and effects. The goal is to create tools that will help fire and smoke managers use the QUIC-Fire (Quick Urban & Industrial Complex-Fire) model for planning controlled burns and assessing wildfire behavior. | The AI output will be a collection of metrics that provide building blocks for a tool that fire and smoke managers will use to implement QUIC-Fire (Quick Urban & Industrial Complex - Fire) into their decision-making. | The AI output will be a collection of metrics that provide building blocks for a tool that fire and smoke managers will use to implement QUIC-Fire (Quick Urban & Industrial Complex - Fire) into their decision-making. | |||||||||||||||||||||||||
| Department Of Agriculture | Northern Research Station | USDA-094 | Analysis of prescribed fire turbulence data | Stage 2 - Development and Acquisition (AI use case is currently under development with the necessary IT tools and data infrastructure having been provisioned) | Science & Space | Pre-deployment | No | Not high-impact | The purpose of this project is to find connections between the heat from a wildfire and the turbulence it creates in the air. Current tools are not very accurate and can make mistakes. This AI effort helps create better tools that can assist fire and smoke managers in making decisions about smoke management. | The model outputs correlation analysis of how temperature change is associated with air turbulence measurements above a prescribed fire. | 06/12/2023 | Developed in-house | The model outputs correlation analysis of how temperature change is associated with air turbulence measurements above a prescribed fire. | None; | ||||||||||||||||||||
| Department Of Commerce | BEA | DOC-53 | GitHub Copilot for Code Modernization | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | BEA | DOC-1 | Meeting Transcription Summarization | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-5 | Real Time Classification for the Economic Census | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-56 | Automated Change Detection | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-57 | School staff information extraction from web page text | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-58 | Census Bureau Demographic Frame Person-Place Model | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-59 | Race and Ethnicity Autocoding | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-60 | Information extraction for web scraped data for Group Quarters frame enhancement | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-61 | Current Population Survey (CPS) Name Screening Tool | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-62 | Linkage and Matching Program (LaMP) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-63 | Census Research Exploration and Analysis Tool (CREAT) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-64 | Dr. NAICS LLM | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-65 | Automating Multilingual Census Data Processing: An AI and Transformer-Based Pipeline for Efficient Language Detection and Translation for Short-Text | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-66 | FAQ for SMaRT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-67 | Statistical package syntax development and debugging | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-68 | DSD Python Code Translation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-69 | Census API GPT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-70 | Natural Language Search for data.census.gov | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | Census | DOC-71 | ACES CAPEX (structures, equipment, other) Machine Learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | FirstNet | DOC-2 | FirstNet Authority Network Program Management Data Analytics Tool and Service | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | FirstNet | DOC-74 | FirstNet Authority Communications Topaz Labs Photo Editing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-75 | Global Business Navigator Chatbot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-76 | Generative AI Tools Pilot - Global Markets | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-77 | Generative AI Tools Pilot - Enterprise & Solutions Architecture | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-3 | ChatGPT Enterprise | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-4 | Google AgentSpace | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-6 | Amazon Web Services NLP, Classification, Text Mining | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-7 | Google Public Sector NLP, Classification, Text Mining | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-8 | Google Colab and VertexAI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | ITA | DOC-9 | Anthropic - Claude For Government | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-16 | Community-based Messaging with LLM | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-17 | NWS Mutual Aid Coordination | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-18 | Draft Fronts for Surface Analysis | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-19 | Enhance LSR (Local Storm Report) Creation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-20 | NWS Public Safety Language Translation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-21 | Scientific Code Development Assistance | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-23 | Evaluate AI Models for Probabilistic Hurricane Predictions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-24 | Evaluate AI for Forecasting Fronts | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-25 | AI-Driven Global Forecast Model and Ensemble | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-26 | AI-based Bias Correction and Downscaling for Weather Models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-27 | Improving Accuracy of Physical-based Models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-28 | Improve Background Error Modeling for JEDI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-29 | Enhanced Precipitation Forecasting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-30 | Ensemble Analysis to Identify Error Sources | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-31 | Enhanced Fire Weather, Aviation, and Storm Surge Forecast Guidance | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-32 | Enhanced Flood Risk and Impact Modeling | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-33 | Fisheries ESA Section 7 Biological Opinions and EFH Consultations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-34 | Fisheries Global Seafood Data System (GSDS) Audit Support Application | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-35 | Optics Data Processing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-36 | Electronic Monitoring | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-37 | Passive Acoustic Monitoring | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-38 | Active Acoustics | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-39 | OLE Looker | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-40 | Grants | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-41 | Streamlining Fisheries DevSecOps with Gemini Code Assist | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-42 | ENSO and Hurricane Outlooks using observed/analyzed fields | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-43 | Drought outlooks by using ML techniques | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-44 | NN/ML for OPC probabilistic guidance and GEFS-Waves | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-45 | ProbSR (probability of subfreezing roads) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-47 | AI/ML based atmospheric physics parameterizations for numerical weather prediction | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-48 | Detecting rip currents with coastal imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-49 | AI QC of water level observations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-50 | Flowcytobot imaging system data using ML | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-51 | HABScope | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-52 | IOOS Coastal Modeling Cloud Computing Sandbox | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-54 | Classifying community shifts with Self-Organizing Maps | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-55 | Picky | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-72 | Structure from Motion photomosaic work in SE/Caribbean | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-73 | Utilizing Machine Learning for Coral Identification at Flower Garden Banks National Marine Sanctuary | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-81 | Improving the quality of NGS's GPS on Benchmarks with machine learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-22 | Coastal Change Analysis Program (C-CAP) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-83 | Supporting the Development of System Resilience Indicators for Wild Rice in Lake Superior, Lake Michigan, and Lake Huron | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-85 | Great Lakes Coastal Assembly Coastal Wetland Conservation Framework | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-87 | Automated Post-Disaster Vessel and Debris Mapping from Remotely Sensed Imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-89 | Machine Learning Collaboration Yields New Methods to Measure Shoreline Marine Debris | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-90 | Mussel Watch data management | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-91 | Ice seal detection and species classification in multispectral aerial imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-92 | Edge AI survey payload development | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-93 | Steller sea lion automated count program | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-84 | Steller sea lion brand sighting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-94 | Use of an Imaging Flow Cytobot for identification of phytoplankton and HABs in Alaska's Large Marine Ecosystems | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-95 | Automated classification of zooplankton images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-96 | Acoustic and image-based habitat classification in the Gulf of Alaska using machine learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-98 | Automated detection and abundance estimation of salmon and pollock in Alaska's walleye pollock fishery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-100 | Predicting annual market squid returns using machine learning methods | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-101 | AI-based automation of acoustic detection of marine mammals | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-102 | Passive acoustic analysis using ML in Cook Inlet, AK | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-103 | Automated matching of identification photographs of Cook Inlet beluga whales | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-104 | Automate detection of marine mammals and birds in still images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-105 | Automated detections of fish and invertebrates in Habcam images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-106 | Deep learning algorithms to automate right whale photo id | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-107 | Ropeless Geolocation System | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-108 | Capitalizing on a groundfish image library to test automated image classification in the northeast region. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-109 | Geospatial Artificial Intelligence for Animals (GAIA) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-110 | Robotic microscopes and machine learning algorithms remotely and autonomously track lower trophic levels for improved ecosystem monitoring and assessment | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-111 | Advancing sustainable shellfish aquaculture through machine learning and automated data collection on fish communities | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-112 | Integrating AI into AUV image analysis workflow | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-113 | Development of large annotated image data sets for training detection of groundfish and benthic invertebrates | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-114 | Using CoralNet to develop substrate detection models for AUV imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-115 | AI and Machine Learning for end-to-end marine ecosystem model calibration and validation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-116 | Climate Change Impacts on the California Current Marine Ecosystem | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-117 | Puget Sound Climate Impacts on Orcas and Salmon | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-118 | Automating Anadromous Fish Counts using imaging sonar data | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-119 | VIAME: Video and Image Analysis for the Marine Environment Software Toolkit | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-120 | Artificial Fintelligence: Automating photo-ID of dolphins in the Pacific Islands | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-121 | Machine learning to automate review of electronic monitoring data collected from the Hawaii Longline Fisheries | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-122 | An Interactive Machine Learning Toolkit for Classifying Species Identity of Cetacean Echolocation Signals in Passive Acoustic Recordings | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-123 | Advancing the use of technology for port sampling in the US Caribbean using image analysis for length composition | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-124 | Fast tracking the use of VIAME for automated identification of reef fish | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-125 | Automated detection of sea turtles from Uncrewed Aircraft System (UAS) Surveys | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-126 | AI for automated Rice's whale call detections and soundscape sources | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-127 | Using FinFindR (computer-assisted identification of dorsal fins) for automation of photo processing. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-128 | Developing automation in the shrimp fisheries electronic monitoring. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-129 | Developing image library for EM collected data to ID Protected Species Bycatch. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-130 | Developing automation in the Reefish fisheries electronic monitoring. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-131 | Automation and detection of Marine Mammals and Turtles from AUV collected imagery. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-132 | Integrating query learning and domain adaptation to develop robust ML algorithms to determine species and count using optical data gathered from fisheries dependent and fisheries independent data collected in the Gulf of Mexico. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-133 | Developing deep learning models to automate age determination | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-134 | Using community-sourced underwater photography and image recognition software to study green sea turtle distribution and ecology in southern California | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-135 | Searching for large whales in UAS photographic strip transect images: developing an AI/ML object detection model using aerial photogrammetry catalogues | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-136 | Automated whale blow detections using IR cameras | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-137 | Partially automated matching of gray whales in lateral photo identification images | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-86 | BANTER, a machine learning acoustic event classifier | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-138 | California sea lion, Steller sea lion, and northern fur seal automated count program in the California current | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-139 | Uncertainties and recommendations for projecting species distributions under climate change | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-140 | Quantifying the spatiotemporal overlap of albacore with diverse fisheries and IUU risk factors in the North Pacific | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-141 | Advancing the West Coast Ocean Forecasting System through Assessment, Model Development, and Ecological Products | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-142 | Dynamic prediction system for illegal, unregulated, and unreported fishing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-143 | Where did they not go? Considerations for generating pseudo-absences for telemetry-based habitat models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-144 | Predictability of Species Distributions Deteriorates Under Novel Environmental Conditions in the California Current System | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-145 | Denoising Citizen Science Big-Data - Empowering Magnetic Navigation with Machine Learning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-146 | SUVI Thematic Maps | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-147 | Use of AI/ML CNN for VIIRS cloud clearing and super resolution | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-88 | LightningCast: A lightning nowcasting model | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-148 | The Development of ProbSevere v3 - An improved nowcasting model in support of severe weather warning operations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-149 | The VOLcanic Cloud Analysis Toolkit (VOLCAT): An application system for detecting, tracking, characterizing, and forecasting hazardous volcanic events | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-150 | Automated detection of hazardous low clouds in support of safe and efficient transportation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-151 | The Next Generation Fire System (NGFS): Automated human expert-like detection of fires in satellite imagery | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-152 | Nowcasting Extreme Fire Behavior | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-153 | Leveraging Machine Learning to Enhance the Quality of Ocean Observations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-154 | Work with Allen Institute on developing the Ai2 Climate Emulator (ACE) for seamless weather applications, including to emulate SHiELD-based medium-range forecasts for large ensemble predictions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-155 | Work with NMFS and NOS using AI/ML to understand how fish habitats are shaped by ocean conditions, and how changes in conditions might impact fisheries distributions and productivity. | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-156 | AI/ML techniques to infer local climate conditions based on large-scale climate drivers (i.e., empirical-statistical downscaling) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-157 | Use of AI/ML techniques to understand the factors controlling coastal hypoxia and its predictability | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-158 | A Hybrid Data-driven and Physics-based Framework for Atmospheric Radiative Transfer Modeling | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-159 | Detection of hardware issues in complex, wide-area computing systems based on non-intrusive workflow performance data gathered via the GFDL EPMT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-160 | AI based Precipitation estimation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-161 | Optimization of highly-concurrent workflow task management systems based on anomaly detection using non-intrusive workflow performance data gathered via the GFDL EPMT | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-162 | Weather.gov 2.0 Rebuild and NWS API Improvement Efforts | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-163 | Utilizing Neural Operator Deep Learning to Enhance National-Scale Coastal Ocean Predictions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-164 | Physics-Informed Neural Network Hurricane Vortex Reconstruction | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-165 | A hybrid physics-machine learning model for orographic precipitation forecasting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-166 | Observation-Centric Estimation and Learning for Outlook Trajectories (OCELOT) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-167 | Severe Convective Weather Parameter Generation using AI/ML with Microwave/Infrared Sounder Satellite Observations for Enhanced Weather Analysis and Forecast | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-168 | AI Based Calibration/validation of satellite microwave sounder observations for Numerical Weather Prediction (NWP) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-169 | AI/ML Enterprise Cloud Mask development and operational implementation for all NOAA and international partner sensors | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-170 | AI-powered Chatbot for Federal Funding Assistance (Grants) Guidance Dissemination | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-171 | Essential Fish Habitat Consultation Efficiency Increases and Template Creation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-172 | Generative AI for Biological Opinions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-173 | Administrative Tools - (Such as meeting/document management, reasonable accommodation needs, other administrative efficiencies such as broad code generation/translation/optimizations) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-174 | CED Generative AI Pilot Program | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-175 | GitHub CoPilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-176 | Document existing functions | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-177 | LLM-Generated Causal Models | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-178 | SWFSC Publications Search | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-179 | Using ChatGPT4/DALLE3, Adobe Sensei and similar for design ideation and image generation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-180 | Coastal Zone Management Act Section 312 Evaluations AI Pilot Project | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-181 | Support Chat Bot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-182 | A Study to Determine Natural Language Processing (NLP) Capabilities with the NCCF Open Knowledge Mesh | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-183 | Developing Access Capabilities for the NCCF Open Information Stewardship Service (OISS) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-184 | Digital Twin for Earth Observations Using Artificial Intelligence | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-185 | Improving Imagery Visualization using Limb-Correction and AI Resolution Enhancement for Microwave sensors | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-186 | Super-resolution of Satellite Imagery Products using Generative AI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-187 | Ocean AI | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-188 | AI based radiative transfer emulator for data assimilation and remote sensing | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-189 | Integrating NOAA APIs with LLMs for Enhanced Access to Environmental Data and Insights | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-190 | AI Pair Programming with GitHub CoPilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-191 | Gemini (Previously Duet AI) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-192 | AI-Driven Predictive Maintenance for the NOAA Fleet | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-193 | AI-Enhanced Emergency Response and Mission Continuity Planning | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-194 | Emissions Reduction for NOAA Fleet | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-195 | Fleet Requirements Analysis and Management Engine (FRAME) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-196 | Assisted translation of code between Matlab, R, and Python | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-197 | Storm Events Knowledge Graph Chatbot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-198 | Alma/Primo | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-199 | Amazon Q Developer Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NOAA | DOC-200 | Apigee with Gemini Code Assist for OpenAPI Development | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-11 | Science Data Portal Autosuggest Search | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-82 | Grammarly | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-10 | LLM support for NIST research (Azure) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-12 | Library Market Research GenAI Tools Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-13 | Google Gemini | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-14 | LLM support for NIST research (NIST HPC) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NIST | DOC-15 | LLM support for NIST research (Google Vertex) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-46 | WAWENETS | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-78 | Streamline Spectrum Activities | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-79 | Spectrum Visualizations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-80 | Use of Lexis. Other searches for legal research | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-201 | Grants Program Administration - Inquiry management Chatbot Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-202 | Grants Program Administration - BEAD Monitoring Plan Agent Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-203 | Grants Program Administration - Tiered Environmental Assessment AI Pilot | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | NTIA | DOC-204 | NTIA Grants Portal (NGP) - AI Summarization, Analytics, and Automations | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-205 | Using AI to assess the AI-readiness of Commerce data | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-206 | BAS Assist | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-207 | DOC Chat | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-208 | USAi.gov | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-209 | PRISM BidScale | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | OS | DOC-210 | Implement MS365 Copilot in OS | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-211 | Prior Art Search: AI Retrieval for Patent Search (PSAI) (Similarity Search) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-212 | Pre-Exam Application: CPC Classification | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-213 | TM Word and Image Search Tool (TWIST) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-214 | Prior Art Search: Patent Image Search | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-97 | Virtual Assistant (Public) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-215 | Pre-Exam Application: Front End Document Code Quality Control | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-99 | Pre-Exam Application: Skill Group Matching | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-216 | GenAI platform and applications for general productivity | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-217 | Prior Art Search: Automated Search AI Pilot (ASAP! Report) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-218 | First Office Action Creation: Claim Comparison for Double Patenting | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-219 | First Office Action Creation: Examination Analysis Determination/Analysis of Informalities (35 USC 101 and 112) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-220 | Patent Fraud Detection & Mitigation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-221 | Assisted software development | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-222 | Call Center Automations (Internal) | Unknown | ||||||||||||||||||||||||||||||
| Department Of Commerce | USPTO | DOC-223 | Pre-Exam Application Processing: Trademark Center (TM Center) AI Automation | Unknown | ||||||||||||||||||||||||||||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-10 | AI for Operations Center | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Natural Language Processing | Reduce human effort and increase efficiency in identifying Lessons Learned relevant to work planning. | Increase awareness and use of Lessons Learned. | Recommended Lessons Learned documents related to proposed and ongoing Work Projects. | 01/01/2022 | Developed in house | Yes | Recommended Lessons Learned documents related to proposed and ongoing Work Projects. | ANL Operational data. | No. | No | Yes | Not applicable | Yes | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Direct usability testing | |||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-118 | Natural Language Processing | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Developed natural language processing (NLP) algorithms will be used to help categorize and understand various energy storage efforts in the R&D communities. Additionally, they will identify trends within the discovered and selected topical focus areas in energy storage. | Categorize and understand energy storage efforts | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-120 | DOE AI Data Infrastructure System | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Leveraging generative AI and cloud-enabled data infrastructure to improve carbon capture and storage user experience | Improve connectivity and create adaptive user interface | User interface/data | User interface/data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-125 | Creation of polymer datasets and inverse design of polymers with targeted backbones having High CO2 permeability and high CO2/N2 selectivity. | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Creation of polymer datasets and inverse design of polymers with targeted backbones having High CO2 permeability and high CO2/N2 selectivity. | Predict permeability and selectivity of polymers | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-150 | To use AI to calibrate the simulation model by matching simulation data with production history data. | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | To use AI to calibrate the simulation model by matching simulation data with production history data. | Calibrate simulation models | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-179 | Data discovery, processing, and generation using machine learning for a range of CCS data and information | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Data discovery, processing, and generation using machine learning for a range of carbon capture and storage data and information | Data compression, clustering, mapping | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-195 | Machine Learning for geophysical data inversion | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | R&D use case that is NOT being used to control or significantly influence a decision or outcome about individuals and does not have an approved agreement for transition into agency operations. | Classical/Predictive Machine Learning | Leak detection. | Faster/better leak detection. | Synthetic seismic and gravity data. | 09/26/2025 | Developed in house | No | Synthetic seismic and gravity data. | Seismic and gravity data, potentially geological models and leak locations for training labels. | If disclosable, data is made accessible through https://edx.netl.doe.gov/ | No | None of the above | Yes | If disclosable, data is made accessible through https://edx.netl.doe.gov/ | |||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-218 | To help automate data discovery and preparations to support a range of CS models, tools, and products | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Natural Language Processing | To help automate data discovery and preparations to support a range of CS models, tools, and products | Automate data discovery | Data | Data | ||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-328 | ORNL: Foundational AI Research | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-329 | ORNL: AI for Materials Science | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-330 | ORNL: AI for Experimental Facilities Operations | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Agentic AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-331 | ORNL: AI for Transportation and Mobility | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-333 | ORNL: AI for Energy Generation and Distribution | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-334 | ORNL: AI for Advanced Manufacturing | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | improve manufacturing | improve manufacturing | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-336 | ORNL: AI for Bio and Health Sciences | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | improve health | improve health | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-337 | ORNL: AI for the Smart Laboratory | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Agentic AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-338 | ORNL: AI for Neutron Science | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-339 | ORNL: AI for Earth Systems | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Generative AI | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-340 | ORNL: AI for Fusion Energy | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | accelerate scientific discovery | accelerate scientific discovery | prediction and classification | prediction and classification | |||||||||||||||||||||
| Department Of Energy | IM-60 - IM Enterprise Operations and Shared Services (IM) | DOE-347 | Network Security and Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | It does not meet the requirements to be high-impact | Classical/Predictive Machine Learning | behavior of attack, triage | Vectra AI is a Structured and Unstructured machine learning (ML) and Security-Led Artificial Intelligence (AI) tool used to detect patterns, anomalous or previously unseen activities inside petabytes of network and log data within the DOE HQ EITS networking boundary and cloud environments. | Prediction: The Vectra AI Platform with Attack Signal Intelligence uses AI to analyze the behavior of attackers, automatically apply triage, correlate, and prioritize each security event or incident. | 05/12/2019 | Purchased from a vendor | Vectra | Yes | Prediction: The Vectra AI Platform with Attack Signal Intelligence uses AI to analyze the behavior of attackers, automatically apply triage, correlate, and prioritize each security event or incident. | Network Data flow and system logs | No | No | No | Vendor owned - code not available. | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-349 | Advancing Market-Ready Building Energy Management by Cost-Effective Differentiable Predictive Control | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Reinforcement Learning | Enhance building energy management with predictive control, safety verification, and optimization. | Lowers building energy costs while ensuring safe, resilient operations. | Lowers building energy costs while ensuring safe, resilient operations. | 01/10/2023 | Developed in house | Yes | Lowers building energy costs while ensuring safe, resilient operations. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-350 | Adaptive Cyber-Physical Resilience for Building Control Systems | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Reinforcement Learning | How to maintain efficient, reliable, and secure operation of building control systems in the face of disruptions, changing conditions, or cyber-physical threats. | Maintains efficient, reliable, and secure building operations under disruptions, changing conditions, and cyber-physical threats. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | PassiveLogic | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-351 | Elucidating Genetic and Environmental Risk Factors for Antipsychotic-induced Metabolic Adverse Effects Using AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Classical/Predictive Machine Learning | Identifying individuals at higher risk for adverse metabolic effects from antipsychotic medications through predictive modeling of genetic and environmental data. | Improves drug safety and reduces adverse health impacts in vulnerable populations. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-352 | APT Analytics | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Classical/Predictive Machine Learning | Automate analysis of atom probe tomography (APT) data for faster scientific insights. | Speeds up materials research through automated nanoscale data analysis. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-353 | AI used for predictive modeling and real time control of traffic systems | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Reinforcement Learning | Apply predictive modeling and real-time control to traffic systems to reduce congestion and emissions. | Reduces traffic congestion, energy use, and greenhouse gas emissions. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-354 | Laboratory Automation | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Computer Vision | Automate SEM/TEM data acquisition by identifying regions of interest with machine learning. | Increases efficiency and throughput of scientific imaging and analysis. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-355 | Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Natural Language Processing | Improving Information Access, Understanding, and Productivity through Language Automation | Accelerates scientific discovery and next-gen computing for earth and embedded systems. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-356 | Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Classical/Predictive Machine Learning | Develop physics-informed learning machines for multiscale and multiphysics problems. | Advances physics-informed machine learning for multiscale and multiphysics simulation. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-357 | Managing curb allocation in cities | Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Classical/Predictive Machine Learning | Manage curb space dynamically in cities to address rising demand for curbside parking. | Improves urban mobility and equitable access to curb space. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-358 | Regional waste feedstock conversion to biofuels | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Classical/Predictive Machine Learning | Convert regional waste feedstocks to biofuels cost-effectively. | Promotes sustainable, economically viable waste-to-energy transitions. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-359 | Use of developed ML techniques to parse opensource text-based information. | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Natural Language Processing | Parse open-source text to define disadvantaged communities for energy transition planning. | Informs equitable energy transition policies for disadvantaged communities. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-360 | AI techniques for identification of suitable delivery parking spaces in an urban scenario | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Classical/Predictive Machine Learning | Identify optimal urban delivery parking spaces to support EV freight adoption. | Supports sustainable freight delivery and electric vehicle adoption. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Cisco | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-361 | Surrogate models for probabilistic Bayesian inference | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Classical/Predictive Machine Learning | Estimate unknown model parameters using surrogate models for probabilistic Bayesian inference. | Enables faster, more reliable insights from complex physical models. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-410 | AI for system design optimization (e.g., detector, accelerator) | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | |||||||||||||||||||||||||
| Department Of Energy | WAPA - Western Area Power Administration (PMA) | DOE-425 | FIMS Invoice BOT | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Employee Reimbursements and Purchase Power processes | Employee Reimbursements and Purchase Power processes | Employee Reimbursements and Purchase Power processes | ||||||||||||||||||||||
| Department Of Energy | PM HQ - Office of Project Management (PM) | DOE-426 | PARSGPT | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | AI is used for Project Management, internal investigations, and audits. | Generative AI | Project Management, internal investigations, and audits | The value-add is derived from providing an accessible way for PM Analysts to safely interact with LLM technology. | Free-form text response to questions (Chatbot) | 22/05/2025 | Developed in house | Yes | Free-form text response to questions (Chatbot) | Data is not required to be reported. | No | Yes | Code is not open source. | |||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-427 | AI Incubator Sandbox | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Agentic AI | Provide a secure, multimodal AI chatbot sandbox for experimentation without internet access. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-430 | ServiceNow Predictive Intelligence | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Generative AI | Improve helpdesk efficiency and data quality | Provide better and more consistent classification of ticket data entered into ServiceNow | Field classification data | 01/01/2024 | Purchased from a vendor | ServiceNow (SAAS hosting provider) | Yes | Field classification data | Existing ticket data is used to train the model with data and training servers stored within FedRAMP High data centers where ServiceNow is hosted | No | None of the above | No | Yes | Positive impact on laboratory cost/time efficiency for helpdesk staff | Yes – by an agency AI oversight board not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-431 | AI-Enhanced Lab Assist | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Detect trends in lab planning/control data to improve efficiency and knowledge sharing. | Integrating lessons learned into Lab Assist Activity Planning to enhance operational efficiency, improve information sharing leveraging best practices, and foster a culture of continuous improvement. | Leverage AI for trend detection in work planning and control data. | 01/10/2024 | Developed in house | Yes | Leverage AI for trend detection in work planning and control data. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | SWPA - Southwestern Power Administration (PMA) | DOE-432 | SWPA Generative AI | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | | Retired | c) Not high-impact | Not high-impact | This was a testing instance and does not meet the definition of high-impact. This AI instance was not trained on any production data. | No | No agency data is used for training | No | No | |||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-433 | Microsoft 365 Copilot (Productivity Suite) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Automate and streamline productivity tasks in Microsoft 365 apps for staff efficiency. | This helps PNNL streamline workflows, improve efficiency, and allow researchers to focus more on innovation and less on administrative tasks, ultimately accelerating scientific research and operational effectiveness. | Using Microsoft 365 Copilot, PNNL aims to produce enhanced document quality, increased efficiency, insightful data analysis, improved collaboration, and automated workflows. | 01/10/2024 | Purchased from a vendor | Microsoft | Yes | Using Microsoft 365 Copilot, PNNL aims to produce enhanced document quality, increased efficiency, insightful data analysis, improved collaboration, and automated workflows. | No | No | No | None of the above | Yes | |||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-434 | NLCOO AI for Lessons Learned tool | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Natural Language Processing | Provide best Lessons Learned based on the Problem a user is trying to address. | Better search to gain insights from existing lessons learned to improve how we do work. | Search list of relevant documents | 01/01/2025 | Developed in house | No Vendor Involved | Yes | Search list of relevant documents | Not trained on any agency or LANL data | No | None of the above | Yes | Yes | This app has made it very easy to identify lessons learned from across the enterprise. | Agency CAIO has waived this minimum practice and reported such waiver to OMB | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-435 | Spot for automated sensing, inspection, and capture | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Computer Vision | Conduct automated sensing, inspection, and data capture in challenging environments with robots. | Spot, a mobile robot, can navigate hazardous or hard-to-reach areas to perform inspections and gather precise data with its advanced sensors and cameras. | Improving efficiency and the accuracy of sensor research data. | 01/10/2024 | Developed with both contracting and in-house resources | Boston Dynamics | Yes | Improving efficiency and the accuracy of sensor research data. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-436 | Microsoft Power Platform capability that provides AI to automate processes in Power Apps and Power Automate | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Automate workflows and processes within Microsoft Power Apps and Power Automate. | By leveraging AI for automation, PNNL can automate routine tasks such as data entry, reporting, and workflow management, freeing up researchers and staff to focus on higher-value activities. | Enhances productivity, reduces human error, and leads to more efficient management of research projects and resources. | 01/10/2024 | Purchased from a vendor | Microsoft | Yes | Enhances productivity, reduces human error, and leads to more efficient management of research projects and resources. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | HC HQ - Office of the Chief Human Capital Officer (HC) | DOE-437 | CAISY | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | CAISY is not designated high-impact with regard to the definitions indicated in OMB Memo M-25-21 | Reinforcement Learning | The feature's main objective is to mimic real-life situations in the workplace using predefined scenarios | 1. CAISY provides interactive, scenario-based content powered by AI where a learner can practice new skills in a safe space. This AI simulation content allows learners to choose a role, practice specific skills by responding to AI prompts, and receive adaptive, personalized feedback to guide their development. 2. The user is greeted by an avatar generated with AI text-to-video. After the introduction, the learner can either interact by typing or use speech-to-text (STT) and text-to-speech (TTS) services for more immersive and natural interaction. When the conversation is over, the learner receives a rating and evaluation. | Speech-to-text (STT) and text-to-speech (TTS) services for more immersive and natural interaction. | 21/08/2024 | Purchased from a vendor | SKILLSOFT | Yes | Speech-to-text (STT) and text-to-speech (TTS) services for more immersive and natural interaction. | None of the above | No | N/A | No | N/A | |||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-438 | ServiceNow Virtual Agent | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Natural Language Processing | Provide chatbot services to help customers resolve issues or open service requests that do not require human intervention | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing ticket data is used to train the model with data and training servers stored within FedRAMP High data centers where ServiceNow is hosted | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-439 | Databricks AI for Cloud Data Warehouse | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Streamline AI/ML solution building and governance in a unified cloud data warehouse. | Efficiency in analytics and deployment of AI/ML models | Recommendation based on analytic input | 01/02/2025 | Purchased from a vendor | Databricks | Yes | Recommendation based on analytic input | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | EHSS HQ - Office of Environment Health Safety and Security (EHSS) | DOE-440 | DOE Technical Standards | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | The use of AI in this office does not serve as the principal basis for decisions or actions that have legal, material, binding, or significant effect on rights or safety. Its use is to enhance quality and technical accuracy | Generative AI | Aid employees in improving the quality and technical accuracy of their work products. | Enhance quality and accuracy of technical standards, supporting DOE's commitment to safety excellence. | Recommendations and feedback for improvement | 02/10/2025 | Developed in house | EnerGPT | No | Recommendations and feedback for improvement | None. | No | No | None. | The primary risk is that inaccurate inputs feeding the AI-generated answers could lead to inaccurate output. For this reason, all use of AI by EHSS-11 is thoroughly vetted and checked by employees prior to consideration for use. | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-441 | Microsoft Copilot for Security | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Strengthen cybersecurity with AI-driven threat detection, response, and vulnerability management. | Proactively identify, mitigate, and respond to threats. Copilot assists in real-time monitoring, threat detection, incident response automation, and vulnerability management. | The intended outputs of using Microsoft Copilot for Security at PNNL include real-time threat detection, automated incident response, enhanced data protection, compliance reports, security insights, reduced cyber risk, and sustained operational continuity | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | The intended outputs of using Microsoft Copilot for Security at PNNL include real-time threat detection, automated incident response, enhanced data protection, compliance reports, security insights, reduced cyber risk, and sustained operational continuity | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-442 | Text To Speech Audio Generation | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Does not match any of the identified categories in M-25-21. It is used to generate audio in courses that do not have a direct impact on safety/data security. | Generative AI | Inefficiencies with generating audio for training materials that does not include CUI, UCNI or Class material. Allows for quick generation and updates to course and video audio | Reduction of time to create and modify training courses to ensure qualification of employees | MP3 files incorporated into videos/courses | 16/09/2022 | Purchased from a vendor | Wellsaid | Yes | MP3 files incorporated into videos/courses | None, we do not train the model | No | None of the above | No | ||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-443 | Lex Natural Language Interface | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | This AI use case is strictly used for general purpose generative AI functionality. It is internal only, and no data is shared outside of NREL. | Generative AI | Find insights in a database with a LLM summarizing the results | Create an efficient and user-friendly system that enables users to query project data, such as funding, AUs (allocation units), project focus, and fiscal years, with natural language prompts. | The system executes a query against a PostgreSQL database and displays an LLM-generated textual summary of returned query records. The system also displays the LLM-generated queries. | 01/06/2025 | Developed in house | Microsoft | Yes | The system executes a query against a PostgreSQL database and displays an LLM-generated textual summary of returned query records. The system also displays the LLM-generated queries. | Structured data from a PostgreSQL database containing information about HPC (High-Performance Computing) projects. | N/A | No | Yes | ||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-444 | Copilot Studio | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | a) High-impact | High-impact | Doesn't meet criteria. | Generative AI | Enhancing day-to-day processes. | 1) Enhancing employee productivity and efficiency. | Customization; low-code development; GPT-based capabilities; analytics; entities and variables | 11/03/2024 | Purchased from a vendor | Microsoft | Yes | Customization; low-code development; GPT-based capabilities; analytics; entities and variables | No | Yes | No | N/A | In-Progress | None Identified | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | Yes | Not applicable | Other | ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-446 | Scopus AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Other | Not available | Assist researchers by reducing time to find applicable research while increasing quality and accuracy of identified hits. | Research citations, abstracts and other summaries. | Developed with both contracting and in-house resources | Not available | No | Research citations, abstracts and other summaries. | None. Scopus AI uses publicly available journal abstracts, no agency data is used. | Not available | No | No | Not available | ||||||||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-449 | EnerGPT | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition defined by OMB. | Generative AI | Improves efficiency of DOE staff. | EnerGPT aims to enhance user productivity and reduce time spent on redundant tasks. | EnerGPT generates answers to user questions. | 02/09/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | EnerGPT generates answers to user questions. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | EHSS HQ - Office of Environment Health Safety and Security (EHSS) | DOE-450 | MAPPRITE | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | MAPPRITE's AI does not provide outputs that serve as a principal basis for decisions or actions with legal, material, binding, or significant effects. | Generative AI | (1) The implemented AI will help automate data mining, ingesting, and indexing of existing disparate organizational data sources information for relevant safeguards and security (S&S) support information; and the (2) expected benefits will be to help | (1) The implemented AI will help automate data mining, ingesting, and indexing of existing disparate organizational data sources information for relevant safeguards and security (S&S) support information; and the (2) expected benefits will be to help improve EHSS-51 business workflows for researching potentially relevant S&S support data available such that the information will be accessible and searchable by policy subject matter specialists for awareness and additional context for strategic decision-making and policy management | In its full implementation phase, the application's AI output will provide S&S policy [support] data available such that the information will be accessible and searchable by policy subject matter specialists for strategic decision-making and policy management without having to manually search through hundreds of sources for relevant information. | 23/11/2025 | Developed with both contracting and in-house resources | Special Technologies Laboratory (STL) | Yes | In its full implementation phase, the application's AI output will provide S&S policy [support] data available such that the information will be accessible and searchable by policy subject matter specialists for strategic decision-making and policy management without having to manually search through hundreds of sources for relevant information. | Department of Energy Directives. Requirement source documents, such as statutes, regulations and standards were also provided to the development team to assist with ingesting content to the AI model via AWS Kendra. | Yes | Yes | |||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-451 | ServiceNow AI Search | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet the criteria | Generative AI | Intelligent query features help ServiceNow users quickly find the answers they need. | 1) Enhancing employee productivity and efficiency. | AI Search includes search features that help users find the answers they need. Query for indexed terms and phrases. Control query logic with Boolean operators. Match a range of indexed terms using wildcard operators. AI Search provides users with clear answers for their search queries. | Yes | AI Search includes search features that help users find the answers they need. Query for indexed terms and phrases. Control query logic with Boolean operators. Match a range of indexed terms using wildcard operators. AI Search provides users with clear answers for their search queries. | No | No | |||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-452 | Copilot for Microsoft 365 | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | a) High-impact | High-impact | Generative AI | Enhancing day-to-day processes. | 1) Enhancing employee productivity and efficiency. | Smart Documentation Creation, Efficient Meeting Management, Data Insights and Analysis, Security and Compliance | 11/03/2024 | Purchased from a vendor | Microsoft | Yes | Smart Documentation Creation, Efficient Meeting Management, Data Insights and Analysis, Security and Compliance | Microsoft 365 data | not applicable | Yes | No | not applicable | In-Progress | None Identified | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | Yes | Not applicable | Other | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-453 | MI8 Collimators Surrogate Model | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project is working to create a ML surrogate model of the existing MI8 collimation system. The purpose of the ML model is to aid in the tuning of the collimation system for accelerator operations and to help find more optimal settings in a timely manner | This project is working to create a ML surrogate model of the existing MI8 collimation system. The purpose of the ML model is to aid in the tuning of the collimation system for accelerator operations and to help find more optimal settings in a timely manner. We hope to extend these techniques to other sub-systems and also a new MI8 collimation system being installed. | The ML outputs of the system are predictions of collimation system performance given collimation system settings and beam characteristics. | 25/09/2025 | Developed in house | No | The ML outputs of the system are predictions of collimation system performance given collimation system settings and beam characteristics. | Accelerator operations machine data | No | Yes | unknown | |||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-454 | AI for High Risk Property | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | 1) Regulatory compliance. 2) Improved identification of high-risk property items and increased productivity. | 1) Regulatory compliance. 2) Improved identification of high-risk property items and increased productivity. | Decision for high-risk property categorization | Yes | Decision for high-risk property categorization | No | Yes | ||||||||||||||||||
| Department Of Energy | SEPA - Southeastern Power Administration (PMA) | DOE-455 | Records Digitization | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | As part of the records digitization process, SEPA is leveraging AI and ML to enhance metadata tagging and quality. | Natural Language Processing | Enhance lookup of agency records and electronic documents | SEPA is leveraging AI and ML to enhance metadata tagging and quality control during the records digitization process. | Accurate metadata assignment in accordance with SEPA's NARA-approved file plan/records schedule. | Accurate metadata assignment in accordance with SEPA's NARA-approved file plan/records schedule. | ||||||||||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-456 | Machine Learning components within Splunk Enterprise Security | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Classical/Predictive Machine Learning | Clustering and classification of events | Improved automation of security threat hunting | Prediction of expected norms of log events | 01/09/2020 | Purchased from a vendor | Splunk | No | Prediction of expected norms of log events | Security Log Events | No | No | |||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-458 | Machine Learning components within CrowdStrike | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Classical/Predictive Machine Learning | CrowdStrike uses machine learning to review security events in order to create notifications of detections and incidents for the SLAC Cybersecurity team. | From the use of CrowdStrike's machine learning components, SLAC receives the benefit of visibility to analyze possible security events | CrowdStrike Detections/Incidents | 27/01/2021 | Purchased from a vendor | CrowdStrike | Yes | CrowdStrike Detections/Incidents | CrowdStrike's machine learning model is trained on data generated by activity at SLAC | No | None of the above | No | ||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-460 | AskOEDI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Tool is designed as an AI research assistant to help users find answers to questions about specific datasets beyond simple keyword searches. Disclaimers provided with the tool state that it should not be used for strategic decision making, nor actio | Generative AI | AskOEDI serves as a virtual research assistant to OEDI users. It provides answers to a variety of user-provided questions using natural language processing and generative machine learning. Users can get answers to questions about specific datasets, | Making data more accessible and user-friendly for the public. | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | 01/10/2025 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | As this is summarizing data from the OEDI data repository the related datasets are described by the catalog which is also used to validate the AI responses: https://data.openei.org/ | https://data.openei.org | No | Yes | ||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-462 | Hanford Search | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | To be retired | The purpose of the Hanford Search is to provide similar functionality to our Hanford Search application without needing multiple applications. The benefits of the AI are that there is a single interface with multiple uses and the AI can provide better, more relevant search results. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to Hanford Search Index | Yes | The system outputs text responses from user prompts requesting information on grounded data related to Hanford Search Index | Our Data does not train the models. | No | Yes | |||||||||||||||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-463 | AI Chat Bot for IT User Services | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Interactive RAG chat bot model trained on existing, updated and new IT resource documentation in the user space. Platform will be used as an informative method for users to handle tier 1 IT issues and help guide users to the correct place. | Generative AI | Interactive RAG chat bot model trained on existing, updated and new IT resource documentation in the user space. Platform will be used as an informative method for users to handle tier 1 IT issues and help guide users to the correct place. | Interactive RAG chat bot model trained on existing, updated and new IT resource documentation in the user space. Platform will be used as an informative method for users to handle tier 1 IT issues and help guide users to the correct place. | AI output will be recommendations and instructions based on training data from IT administrators in more user-friendly responses | 25/09/2025 | Developed in house | No | AI output will be recommendations and instructions based on training data from IT administrators in more user-friendly responses | Help Desk Knowledge Base Article and other supporting documentation in the user space | No | No | No | In-Progress | ||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-464 | Hanford Popfon Search | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | See question 7 - use case retired. | The purpose of the Hanford Popfon Search is to provide similar functionality to our employee look-up application without needing multiple applications. The benefits of the AI are that there is a single interface with multiple uses and the previous application, which is older in architecture, can be retired, providing a safer, more secure, and cost effective alternative. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to employee contact and organization information | Yes | The system outputs text responses from user prompts requesting information on grounded data related to employee contact and organization information | Our Data does not train the models. | No | Yes | |||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-466 | AI for Intelligent Automation | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI. | Generative AI | workplace automation | Improve the timeliness and quality of manual work processes through automation where generative AI can make comparisons and decisions using INL procedures and controlled documents, with humans performing final validation and approval. | Completion of forms for human validation and approval. | Developed with both contracting and in-house resources | Not available | Yes | Completion of forms for human validation and approval. | At this time, INL plans to use non-CUI data with this solution, including the employee handbook, approved controlled documents, and other material that will assist workers in completing processes and activities. RAG (mini RAG preferred) is the method that will be used for integration with the AI solution. | Not available | Yes | Yes | Not available | ||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-467 | Cyber Threat Enrichment | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Not available. | Other | Cyber threat analysis | Enrich emerging cyber threat data with recent and past analysis data curated from the DOE-CESER Geo Threat Observable project and grid modernization project Deep Learning Malware using TBs of well-structured cyber threat data. Input vulnerability, weakness, and malware information to find connections to well-analyzed cyber threats, including attack patterns, known exploits, past mitigations and detections. When firmware binaries are analyzed and translated to structured threat data, they are used to create codified attack surfaces and a Firmware or Software Bill of Materials (SBOM) for supply chain tracking | Structured Threat Information Expression (STIX) data format providing actionable and implementable codified contextual data for use in cyber security products or, if firmware binaries are analyzed and translated to STIX, output is codified attack surfaces and an SBOM. | Developed in house | No | Structured Threat Information Expression (STIX) data format providing actionable and implementable codified contextual data for use in cyber security products or, if firmware binaries are analyzed and translated to STIX, output is codified attack surfaces and an SBOM. | Open source threat intelligence collected, NLP used to scrape information off of cyber incident reports and websites, some data from cyber sensors, threat feeds and some data from manual threat analysis activities. | Not available. | No | Yes | Not available. | |||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-469 | Hanford Ai Liaison (HAL) 1.1 | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, binding, or otherwise significant effects on the items listed in OMB M-25-21's definition of High Impact. | Generative AI | productivity efficiency | cost savings, increased efficiency, increased productivity, greater analytics of data | text answers to input questions | 28/10/2024 | Developed in house | Yes | text answers to input questions | Pre-trained from OpenAI | Not applicable | No | Not applicable | None of the above | Yes | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |
| Department Of Energy | GDO HQ - Grid Deployment Office (GDO) | DOE-472 | Argonne Resilience AI Assistant ARAIA (nee CALLM) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | The ARAIA (nee CALLM) project will have substantial impact in meeting current administration priorities for a secure and resilient power grid by ensuring that risks to the grid can be mitigated through planning, capital resource allocation | Generative AI | meeting current administration priorities for a secure and resilient power grid | The Argonne Resilience AI Assistant (ARAIA), née CALLM (Climate Action through Large Language Models), addresses the problem of communicating complex climate projections and scientific literature to a broad audience, particularly electric sector stakeholders. It simplifies this information to help these stakeholders identify climate resilience solutions. ARAIA is on track to meet the needs and outcomes outlined in the initial project scope and work plan, and is now better positioned to meet future needs and administration goals. Expected benefits include improved communication of climate science, empowering stakeholders to directly address climate change impacts, and accelerating the scalability to serve a wider range of users. This ultimately leads to more effective resilience planning and potentially cost savings through better informed decision-making. Users can interact with the system to retrieve specific data, such as fire weather indices, and receive actionable recommendations on areas like hazard mitigation planning, infrastructure wildfire risk, and comprehensive wildfire impact. This project represents a significant step forward in integrating cutting-edge AI with our resilience planning efforts, ultimately helping communities and decision-makers mitigate the impacts of natural hazards. | The AI output of Argonne Resilience AI Assistant (ARAIA) is information synthesized from complex climate projections and scientific literature, presented in a simplified and accessible format. This output helps users understand potential climate impacts and identify appropriate climate resilience solutions. The information is grounded in vetted data and published research to minimize inaccuracies and hallucinations common in large language models. The output could range from summaries of climate-related risks, to lists of potential adaptation strategies tailored to specific situations, depending on the user's input and the function of the system it's integrated with (like ClimRR). | No | The AI output of Argonne Resilience AI Assistant (ARAIA) is information synthesized from complex climate projections and scientific literature, presented in a simplified and accessible format. This output helps users understand potential climate impacts and identify appropriate climate resilience solutions. The information is grounded in vetted data and published research to minimize inaccuracies and hallucinations common in large language models. The output could range from summaries of climate-related risks, to lists of potential adaptation strategies tailored to specific situations, depending on the user's input and the function of the system it's integrated with (like ClimRR). | Argonne Resilience AI Assistant (ARAIA) tool utilizes vetted climate data and published climate resilience literature to train, fine-tune, and evaluate its performance. The specific datasets are not detailed here, but the approach emphasizes the use of established climate science information to ground the model's responses and mitigate inaccuracies. | No | |||||||||||||||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-474 | WCD-AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Natural Language Processing | Recommend keyword-based search for relevant Lessons Learned published on DOE OPEXShare. | Recommend keyword-based search for relevant Lessons Learned published on DOE OPEXShare. | Recommended keywords for search based on user-authored Work Control Document. | 01/01/2023 | Developed in house | Yes | Recommended keywords for search based on user-authored Work Control Document. | Existing Work Control Document records in database system. | No | No | Yes | Not applicable | ||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-476 | AI for Isotopes (Pellet) Inspection | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Computer Vision | Safety for employees | 1) Reduce technician radiation exposure. 2) Increased productivity. | Recommendation regarding pellet quality | Yes | Recommendation regarding pellet quality | No | Yes | ||||||||||||||||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-477 | OPQ-AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Classical/Predictive Machine Learning | Semi-automated person ID matching with existing database before new accounts are created. | Significantly reduces person-hours for manual review of incoming people registrations to match with existing database records by recommending most likely matches, if an existing record is identified that matches with the registration details. | Recommendation of matched person record that already exists or that a new person record should be created. | 01/03/2021 | Developed in house | Yes | Recommendation of matched person record that already exists or that a new person record should be created. | Data includes read-only access internal person/HR records existing in current database systems. | No | No | Yes | Not applicable | Yes | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Direct usability testing | |||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-479 | Funding Finder | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | It is retired. | The Funding Finder will aggregate FOAs from different DOE sources and enable users to ask questions when identifying opportunities and developing proposals. | Answers to questions about DOE FOAs. | Answers to questions about DOE FOAs. | |||||||||||||||||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-481 | PDF Analyzer | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not meet the requirements defined by OMB. | Generative AI | Summarization and knowledge retrieval. | PDF Analyzer will enable teams across the DOE to upload large PDFs and ask questions and generate content related to those PDFs. | PDF Analyzer will output the answers to a user's question along with the relevant sections of the PDF that the answer is based on. | 02/09/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | PDF Analyzer will output the answers to a user's question along with the relevant sections of the PDF that the answer is based on. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-482 | Hanford Service Ticket Lookup | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | See question 7 - use case retired. | The purpose of the Hanford Service Ticket Lookup is to provide a single interface for customers to ask questions and get to service tickets without having to navigate extensive menus, tool bars, and search functions. Eventually, this will include service tickets from multiple platforms, providing the customer with a single interface to do all things service request related. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to Service Ticket Requests | Yes | The system outputs text responses from user prompts requesting information on grounded data related to Service Ticket Requests | Our Data does not train the models. | No | Yes | |||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-483 | INL AI Virtual Assistant (AiVA) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used as a general/business chat agent for genAI. | Generative AI | work productivity | This chatbot uses commercial ChatGPT-like capability to answer questions, provide coaching on processes, summarize and improve communications, and produce code in a variety of formats. INL has been authorized and has planned activities in 2025 to begin adding internal INL non-CUI data using RAG. Examples include the employee handbook and approved controlled documents. | Outputs are consistent with commercial chatbot products, such as ChatGPT. | Developed in house | Yes | Outputs are consistent with commercial chatbot products, such as ChatGPT. | At this time, INL plans to use non-CUI data with this solution, including the employee handbook, approved controlled documents, and other material that will assist workers in completing processes and activities. RAG (mini RAG preferred) is the method that will be used for integration with the AI solution. | Not available | No | Yes | Not available | |||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-485 | Unleashing AI Transformer Models on FPGAs for Accelerating LHC and Particle Physics | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project centers on the deployment of Transformer models for Field Programmable Gate Arrays (FPGA), in order to seamlessly integrate AI capabilities into particle physics experiments, specifically focusing on the L1 triggering schemes and real-time magnet quench detection | This effort focuses on Transformer models for representation learning on Field Programmable Gate Arrays (FPGA), in order to seamlessly integrate AI capabilities into particle physics experiments, specifically focusing on the CMS level-1 (L1) trigger at the High-Luminosity LHC (HL-LHC) and real-time magnet quench detection. While conventional methods for event identification have limitations, modern AI and machine learning techniques offer superior alternatives. | This AI system has twofold use cases: representation learning for the LHC Trigger and multi-modal magnet quench detection algorithms. | 25/09/2025 | No | This AI system has twofold use cases: representation learning for the LHC Trigger and multi-modal magnet quench detection algorithms. | research datasets from scientific experiments | No | Yes | |||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-486 | Hanford Procedure Search | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | To be retired | The purpose of the Hanford Procedure Search is to provide greater service in our customers' search for relevant procedures, which is a main look-up for many of our employees. The benefits of the AI are that there is a single interface with multiple uses and the AI can provide better, more relevant search results. Additionally, users can ask questions in natural language instead of needing to input specific search criteria. | The system outputs text responses from user prompts requesting information on grounded data related to the Hanford Procedure System | Yes | The system outputs text responses from user prompts requesting information on grounded data related to the Hanford Procedure System | Our Data does not train the models. | No | Yes | |||||||||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-487 | LLM EV | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Not intended to produce outputs that are used as principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety | Generative AI | Large-scale policy analysis to understand permitting barriers to the deployment of electric vehicle charging infrastructure | Making NREL EV-specific data more accessible for researchers and accelerating research in this field. | Outputs analysis results from these studies: https://www.sciencedirect.com/science/article/pii/S2666546824000971 | 01/10/2024 | Developed in house | Azure, OpenAI | Yes | Outputs analysis results from these studies: https://www.sciencedirect.com/science/article/pii/S2666546824000971 | TBD | No | Yes | TBD | ||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-488 | AskPRIMR | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Tool is designed as an AI research assistant to help users find answers to questions about specific datasets beyond simple keyword searches. Disclaimers provided with the tool state that it should not be used for strategic decision making. | Generative AI | The U.S. Department of Energy's Portal and Repository for Information on Marine Renewable Energy (PRIMRE) is an interconnected system of knowledge hubs that provide access to data, information, and other resources for the marine energy community. | Making data more accessible and user-friendly for the public. | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | 01/10/2024 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | As this is summarizing data from the PRIMRE data repository the related datasets are described by the catalog which is also used to validate the AI responses: https://openei.org/wiki/PRIMRE | https://mhkdr.openei.org/ | No | Yes | https://mhkdr.openei.org/ | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-490 | GitHub Copilot with the OpenAI Codex | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Agentic AI | Accelerate coding tasks with AI-assisted code suggestions and automation. | This increases their productivity, allows them to focus more on innovation and research, and accelerates the development of high-quality software solutions for scientific research. | This increases their productivity, allows them to focus more on innovation and research, and accelerates the development of high-quality software solutions for scientific research. | 01/10/2024 | Developed with both contracting and in-house resources | OpenAI | Yes | This increases their productivity, allows them to focus more on innovation and research, and accelerates the development of high-quality software solutions for scientific research. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-491 | First Alert DataMinr AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of OMB Memorandum M-25-21. | Natural Language Processing | combine the text of publicly available news sources about events that share the same time and geographic location | We use this AI service to provide spatial or situational awareness of incidents occurring around NNSA or DOE sites. Emergency reporting and monitoring; speeds up emergency operations reporting. The AI is being used in place of a large team that would be required for coding and development of emergency services facilitation. This would be cost-saving software for the Federal government. | Data aggregation and reflection. | 03/01/2025 | Purchased from a vendor | Dataminr | Yes | Data aggregation and reflection. | This information is proprietary to DataMinr. No information or contribution comes from NNSA. | Dataminr - FirstAlert is not publicly available and is not required to be. | No | None of the above | No | N/A | ||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-492 | Yurts AI search function | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Generative AI | Access to data in various silos | Easier access to appropriate data | Chat and search with ability to modify responses to fit the user's needs (i.e. tone and formality) | 13/03/2024 | Developed with both contracting and in-house resources | Legion (Previously Yurts) | No | Chat and search with ability to modify responses to fit the user's needs (i.e. tone and formality) | SLAC Internal Documentation and user prompts | No | No | |||||||||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-493 | Interactive platform to help review and create "Promoting Inclusive and Equitable Research" Plans | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Retired | A trained and informed model that can help to review PIER plans for accuracy, consistency and also to help integrate specific PPPL DEIA goals and initiatives, aligned with input from the user, helping to clearly define the goals of the research plan. | AI output will be revision suggestions to a user submitted PIER plan in order to align with PPPL specific goals and initiatives. It will also help to guide the user to create a more consistent plan with previously submitted/approved plans. | No | AI output will be revision suggestions to a user submitted PIER plan in order to align with PPPL specific goals and initiatives. It will also help to guide the user to create a more consistent plan with previously submitted/approved plans. | PPPL specific PIER plan guidelines, previously submitted and approved PIER plans, public DOE guidance and other leadership data to help refine plans that align with laboratory strategic goals. | No | No | |||||||||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-494 | Energy Wizard | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | The tool is currently only available internally and serves as a research tool that enables the discovery and evaluation of NREL published research. | Generative AI | The tool aims to explore and extract meaningful insights from NREL's vast database of publications including but not limited to technical reports, presentations, and conference papers. There are over 56,000 publications in the NREL research hub and t | Making NREL data more accessible for researchers and accelerating research. | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the selected publications and research profiles. | 01/08/2024 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the selected publications and research profiles. | As this is summarizing data from the OEDI data repository the related datasets are described by the catalog which is also used to validate the AI responses: https://data.openei.org/ | https://data.openei.org | No | Yes | https://github.com/NREL/elm | |||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-495 | LANL AI Portal | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Generative AI | "Democratized access to open-source/open-weights Large Language Models (LLMs) for general purpose office productivity, research of AI models, software development, operational streamlining, code development " | Democratized access to open-source/open-weights Large Language Models (LLMs) for general purpose office productivity, research of AI models, software development, operational streamlining, code development | Interactive text chat replies from user prompts, summaries of documents user submitted for Retrieval Augmented Generation (RAG), Replies to API queries from enterprise and scientific applications | 06/01/2025 | Developed in house | Amazon Web Services (hosting provider) | Yes | Interactive text chat replies from user prompts, summaries of documents user submitted for Retrieval Augmented Generation (RAG), Replies to API queries from enterprise and scientific applications | Not trained in-house, using open-source/open-weights models. Reliant on model provider transparency | No | None of the above | Yes | https://github.com/vllm-project/vllm https://github.com/awslabs/LISA https://github.com/BerriAI/litellm | Yes | Positive impact on laboratory cost/time efficiency by reducing compliance burden on internal teams | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-497 | SmartPD Creator | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition defined by OMB. | Generative AI | It improves the speed and accuracy for DOE employees to create position descriptions. | The time to hire much needed resources will be reduced and the process will be greatly improved. | Position descriptions for federal roles. | 02/09/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | Position descriptions for federal roles. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-498 | ChatGPT Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | General purpose business system | Generative AI | "General business productivity, research of AI models " | General business productivity, research of AI models | Interactive text chat replies from user prompts, summaries of open/public/unrestricted documents user submitted for Retrieval Augmented Generation (RAG) through CustomGPTs | 27/05/2024 | Purchased from a vendor | OpenAI (SAAS hosting provider) | No | Interactive text chat replies from user prompts, summaries of open/public/unrestricted documents user submitted for Retrieval Augmented Generation (RAG) through CustomGPTs | Not trained on any agency or LANL data | No | None of the above | No | Yes | Positive impact on laboratory cost/time efficiency by making a market-leading research tool available to all LANL employees | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-499 | Argo | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Generative AI | Broad usage across science and lab-ops for use cases that could benefit from genAI techniques. | Enables anyone within the Argonne community to leverage text-based generative AI with their Argonne-specific information and data, including sensitive research or operational data up to and including CUI. | Large language model responses (prediction-based) following user prompting. | 01/11/2023 | Developed in house | Yes | Large language model responses (prediction-based) following user prompting. | N/A – we are using existing pre-trained large language models, no training required. | No | No | Yes | Not applicable | Yes | Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | Yes | Not applicable | Direct usability testing | ||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-500 | AI Chat Bot for Facility Sustainability Practices | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Retired | Interactive RAG chat bot model trained on facility recycling, composting and trashing guidelines to inform users how to handle niche cases for sustainably getting rid of unwanted items. This should reduce confusion when throwing items out and also increase the amount of properly recycled items at PPPL. | AI output will be recommendations and instructions based on training data from facility data on recycling, trash and composting | No | AI output will be recommendations and instructions based on training data from facility data on recycling, trash and composting | Facility documentation on proper recycling, trash and composting guidelines. Location data for areas where specific items can be thrown away. Dynamic training on publicly maintained websites to interpret updated guidelines within PPPL and in the complex, as well as data on upcoming events. | No | No | |||||||||||||||||||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-501 | AskGDR | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Tool is designed as an AI research assistant to help users find answers to questions about specific datasets beyond simple keyword searches. | Generative AI | AskGDR serves as a virtual research assistant to GDR users. It provides answers to a variety of user-provided questions using natural language processing and generative machine learning. Users can get answers to questions about specific datasets. | Making data more accessible and user-friendly for the public. | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | 01/10/2024 | Developed in house | AWS, Azure, OpenAI | Yes | The system leverages Retrieval-Augmented Generation to find semantically relevant content which the AI (LLM) summarizes for the end user as a method to describe relevant content within the data catalog. | As this is summarizing data from the GDR data repository the related datasets are described by the catalog which is also used to validate the AI responses: https://gdr.openei.org/ | https://gdr.openei.org/ | No | Yes | ||||||||||||
| Department Of Energy | PA HQ - Office of Public Affairs (PA) | DOE-502 | Topic Modeling for Energy.gov | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Because it doesn't impact an individual or entity's civil rights, civil liberties, or privacy; or an individual or entity's access to education, housing, insurance, credit, employment, and other programs; an individual or entity's access to critical | Minimize time spent through manual effort of reading and tagging 100,000 Energy.gov webpages. | A list of five tags that best categorize an Energy.gov webpage. | Yes | A list of five tags that best categorize an Energy.gov webpage. | Web content from Energy.gov is being used for this use case. | |||||||||||||||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-503 | Boston Dynamics Spot Robotics | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Obstacle avoidance without human intervention | Automation of the robot's movements | Development and testing of robotic use for security and emergency responses, helping to decide on the "best" path for the robot to move | 05/07/2025 | Purchased from a vendor | Boston Dynamics | Yes | Development and testing of robotic use for security and emergency responses, helping to decide on the "best" path for the robot to move | No | None of the above | Yes | ||||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-504 | DNA-P Use Cases Leveraging Artificial Intelligence (Pilot) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of OMB Memorandum M-25-21. | Other | "- identify data clusters/trend analysis - identify data discrepancies/data enrichment - generate suggestions (including generating reports, data linkages, and courses of action) - generate graphical and natural language analyses" | "-save time for DOE DNA-P users - improve DOE/NNSA data quality - improve DOE/NNSA safety operations" | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities -responses to queries via RAG workflows" | 01/04/2024 | Developed in house | Palantir | Yes | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities -responses to queries via RAG workflows" | "- No custom models developed - AI use cases have been deployed on publicly available information as well as agency provided data" | No | PIA not publicly available | None of the above | No | PIA not publicly available | ||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-506 | Advanced Peer to Peer Transactive Energy Platform with Predictive Optimization | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Applying AI/ML to optimize renewable energy generation and consumption on Smart Grid with blockchain technologies | Applying AI/ML to optimize renewable energy generation and consumption on Smart Grid with blockchain technologies | Applying AI/ML to optimize renewable energy generation and consumption on Smart Grid with blockchain technologies | Applying AI/ML to optimize renewable energy generation and consumption on Smart Grid with blockchain technologies | ||||||||||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-507 | ServiceNow Virtual Agent Natural Language Understanding | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | System is used as part of IT Service Management and does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Natural Language Processing | Aid in users receiving better IT support for incident reporting and service delivery. | Quicker and easier access for users to access pre-built IT Service Management incident and request templates. | Pre-built IT Service Management incident and request templates. The Virtual Agent NLU is only used to understand the user's intent and entity, where it then performs a search of the service catalog to return the most relevant result. | 30/06/2025 | Purchased from a vendor | ServiceNow | Yes | Pre-built IT Service Management incident and request templates. The Virtual Agent NLU is only used to understand the user's intent and entity, where it then performs a search of the service catalog to return the most relevant result. | The NLU is provided common phrases to recognize related to opening a ticket, closing a ticket, checking the status of a ticket, updating a ticket, searching a knowledge article, and connecting with a live agent. | No | No | |||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-508 | ServiceNow Predictive Intelligence | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Classical/Predictive Machine Learning | Reduce error rate of categorization of incidents in ServiceNow | Reduction of errors in the categorization of incidents | Predictions of categorization of incidents | Predictions of categorization of incidents | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-510 | Microsoft Bing Service | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-511 | Articulate 360 AI Assistant | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | Purchased from a vendor | Articulate | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Applicable | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | |||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-512 | Microsoft Azure Quantum Elements | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Accelerate chemistry and materials discovery using AI, HPC, and quantum-ready tools. | Speeds discovery of new materials and chemicals for energy solutions. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft Corporation | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-513 | Nanoparticle growth kinetics and mechanism | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | AI is used to digitalize the nanoparticle from TEM images. Morphology and crystalline features will be made available through AI. Combining with kinetics modeling, AI detects critical material transformation event. The CFN TEM facility will be enhanced by AI pipelines. | AI is used to digitalize the nanoparticle from TEM images. Morphology and crystalline features will be made available through AI. Combining with kinetics modeling, AI detects critical material transformation event. The CFN TEM facility will be enhanced by AI pipelines. | AI is used to digitalize the nanoparticle from TEM images. Morphology and crystalline features will be made available through AI. Combining with kinetics modeling, AI detects critical material transformation event. The CFN TEM facility will be enhanced by AI pipelines. | AI is used to digitalize the nanoparticle from TEM images. Morphology and crystalline features will be made available through AI. Combining with kinetics modeling, AI detects critical material transformation event. The CFN TEM facility will be enhanced by AI pipelines. | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-514 | Bernie-AI: Infrastructure Planning Support POC | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Generative AI | Quick access to specific building, and major asset information to be used for planning and predictive maintenance purposes | Ability to quickly find and access building and asset information to support planning activities | Specific, data-driven answers to building and major asset use and impacts | 30/06/2025 | Developed in house | No | Specific, data-driven answers to building and major asset use and impacts | Building, asset, maintenance data | No | No | None of the above | Yes | N/A | In-Progress | Not applicable | Not applicable | Other | |||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-516 | Center for Mesoscale Transport Properties | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Center for Mesoscale Transport Properties | Center for Mesoscale Transport Properties | Center for Mesoscale Transport Properties | Center for Mesoscale Transport Properties | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-517 | Develop a Machine Learning Framework for Optimal Computational Campaigns for Complex Uncertain Systems | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | 'Two projects to use machine learning to accelerate the design of optimal strategies and robust computational campaigns for complex systems in the presence of substantial data and model uncertainty, and/or which have processes that span multiple scales' | ||||||||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-518 | Microsoft Co-Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, binding, or significant effects on those items listed in OMB Memorandum M-25-21 in the definition of High Impact. | Generative AI | Better analytics of data in our Microsoft applications | Better analytics of data in our Microsoft applications | text answers to input questions | 01/09/2025 | Purchased from a vendor | Microsoft | Yes | text answers to input questions | Pre-trained from OpenAI and access to employees' Microsoft data sources | Not applicable | No | Not applicable | None of the above | No | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | Direct usability testing
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-519 | Visual Studio Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-520 | NLP Data Analytics for Program and Portfolio Insights | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Produces analytic insights only; does not affect rights, benefits, or binding decisions. | Natural Language Processing | Improve analytical capabilities and self-service access to organizational data. Increase process efficiency and analytical insights. | Faster and more consistent analysis, earlier risk detection, enhanced decision-making. | Dashboards, SQL query results, structured datasets, extracted keywords, sentiment, thematic trends. | Dashboards, SQL query results, structured datasets, extracted keywords, sentiment, thematic trends. | ||||||||||||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-521 | Command Media | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | The AI output is not a basis for decisions or actions. It is for information retrieval. | Generative AI | The AI is intended to solve the inefficiency plant personnel face when they have questions about policies, procedures, and work instructions by providing a more direct way to find information. | The expected benefit is increased efficiency for plant personnel. | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. | Developed in house | No | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. | ||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-522 | Ask CAS | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG to support issues management | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-523 | AI/ML in High Energy Physics Research | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | Within the high energy physics research in the energy, intensity and cosmic frontiers, as well as the advanced detector R&D and scientific computing, AI/ML techniques have been developed and applied to solving a variety of problems in studying particle physics and cosmology. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-524 | ESH&Q NEPA App | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Other | Not available | Speeds up environmental review processes and improves compliance accuracy, reducing delays in project approvals. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-525 | Microsoft OneNote | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-526 | OpenAI ChatGPT Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and capabilities | Generative AI | Quick access to general knowledge. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | OpenAI | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-527 | ACORN (Autonomous Operation for Reactor Technologies) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Currently focused on a reactor simulator or auxiliary moderator displacement rod with minimal impacts on reactor safety | Classical/Predictive Machine Learning | AI automatically identifies process models from simulation and operational data, solves for optimal control actions that can achieve user-defined objectives, executes actions and observes system responses | Reduce labor and costs to perform operation tasks in advanced reactors and microreactors | optimal control actions | 01/09/2023 | Developed with both contracting and in-house resources | Open source development | Yes | optimal control actions | Time series data from sensors or simulation results | Not available | No | Yes | Not available | |||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-529 | NIF Shot Analytics & Predictive Maintenance Support Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Classical/Predictive Machine Learning | Quick access to specific NIF laser data and problem resolution information | Ability to quickly find and access targeted NIF maintenance knowledge | Specific, data-driven answers to NIF shot maintenance questions | 30/06/2025 | Purchased from a vendor | C3 | No | Specific, data-driven answers to NIF shot maintenance questions | NIF shot data and support ticket information | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-530 | Merlin - KCNSC Generative AI with RAG | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Output does not serve as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. Assists in developing software code that may accept product, but this AI will not make those decisions. | Generative AI | Intended to address the opportunity to enhance overall productivity. No definite 'problem' being solved, just capitalizing on an opportunity to leverage industry investment in generative AI broadly. | It serves as a general productivity enhancer, not driven by a specific problem, but by the opportunity to improve efficiency. | Outputs are context-aware responses that combine generated content with retrieved, authoritative information to ensure accuracy, relevance, and grounding. | 16/07/2025 | Developed in house | N/A - Open Source Integration | Yes | Outputs are context-aware responses that combine generated content with retrieved, authoritative information to ensure accuracy, relevance, and grounding. | Leveraging OpenAI open-sourced models, so they are responsible for providing training data | No | None of the above | No | https://huggingface.co/openai/gpt-oss-120b | |||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-531 | IDAES-PSE | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Offers extensive process systems engineering (PSE) capabilities for optimizing the design and operation of complex, interacting technologies and systems. | Optimize the design and operation of complex, interacting technologies and systems. | Models | Models | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-532 | NRAP-Open-IAM | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Enables quantification of containment effectiveness and leakage risk at carbon storage sites in the context of system uncertainties and variability. | Enables quantification of effectiveness and risk. | Data | Data | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-533 | SMMM | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | AI/ML is being used to evaluate measurements in real-time during simultaneous experiments on two beamlines and then drive subsequent data collection on both of the beamlines to maximize the scientific value generated per time. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-534 | Custom GenAI for eVinci Microreactor Engineering (MauroGPT) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Generative AI | Reactor engineers spending extensive time manually iterating through documents during the reactor engineering process | Reactor engineers save time by using the AI to quickly find answers and relevant source documents. | Answers to reactor design and engineering questions with citations back to source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development with INL Advanced Analytics Center of Excellence | Yes | Answers to reactor design and engineering questions with citations back to source documents. | Advanced reactor engineering schematics | Not available | No | No | Not available | ||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-535 | AI Builder Document Scraping | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | AI Builder scrapes PDF files for text. The end user is responsible for verifying the quality of the final text. This does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Computer Vision | Enhance and automate PDF scraping for large sets of PDF files. | Improved Microsoft PDF scraping model that allows the user to provide a training set for their PDFs. | Text and file output in Microsoft applications. | 03/02/2025 | Purchased from a vendor | Microsoft | Yes | Text and file output in Microsoft applications. | Microsoft AI Builder has a built-in PDF scraping model and learns from a set of PDF files where data is located within the PDF layout. | No | No | |||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-536 | Microsoft Azure Authoring Tools | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NREL - National Renewable Energy Laboratory (EE) | DOE-537 | WETO SA | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | Not intended to produce outputs that are used as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety | Generative AI | Assess the developable quantity and quality of the renewable resources. | It seeks to better understand the uncertainty and impact of siting considerations and to understand when and where wind technological innovation may help to overcome potential land barriers. | Robust surrogate models to deliver meaningful insights across a broad set of technology innovations and site characteristics while overcoming computational challenges | 01/10/2024 | Developed in house | Azure, OpenAI | Yes | Robust surrogate models to deliver meaningful insights across a broad set of technology innovations and site characteristics while overcoming computational challenges | No | No | ||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-538 | AI Assistant Phase 2 Simple Chat | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Doesn't meet the criteria. | Generative AI | Make employees at ORNL more productive | Enhanced employee productivity | Natural language responses based on a wide variety of file and text based input. | 04/08/2025 | Developed in house | Yes | Natural language responses based on a wide variety of file and text based input. | OpenAI training data | No | Yes | None of the above | Yes | Yes | |||||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-539 | Drone Imagery Analysis | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | AI allows the drone to improve human safety in imagery collection and analysis. | Simple and efficient asset inspection with an improved safety factor. | Automatically stitch together images for visualization by humans. | 01/10/2022 | Purchased from a vendor | TBD | No | Automatically stitch together images for visualization by humans. | Human review will be used to validate. | No | None of the above | No | In-Progress | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-540 | OpenAI Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, or binding effects | Generative AI | Provide enterprise-grade AI assistance with secure access to OpenAI GPT models. | Provides secure, reliable access to advanced AI capabilities for research. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | OpenAI | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-541 | VIPER (Visualization for Predictive Maintenance Recommendation) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | VIPER presents comprehensive system health diagnostics, explainability metrics, and actionable recommendations to system engineers at nuclear power plants, enabling informed decision-making through an easy-to-use visualization interface. A multi-mode | Reduce labor and costs to perform maintenance tasks in existing light water reactors | System diagnostic and prognostic results, system description and root cause explanations | 01/09/2024 | Developed with both contracting and in-house resources | Open Source development | Yes | System diagnostic and prognostic results, system description and root cause explanations | Sensor data from Salem and Hope Creek nuclear power plants, operated by PSEG; NRC, EPRI, and INL public reports | Not available | No | Yes | Not available | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-543 | Decisions AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Enhance meeting management by integrating AI insights with Decisions platform. | Enhances meeting effectiveness and decision-making outcomes. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | OpenAI | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-544 | Poseidon | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Other | Content analysis across domains and structured/unstructured content for SCRM | Productivity Tool | Text | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-545 | Soil Moisture Modeling | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, or binding effects | Classical/Predictive Machine Learning | Machine learning solves issues with soil moisture by enhancing accuracy and efficiency in data analysis | The ability to determine evapotranspiration rates on disposal cell cover using publicly available data from satellites. | Multi-layer soil moisture model/prediction | 01/02/2022 | Purchased from a vendor | University of Montana | No | Multi-layer soil moisture model/prediction | Data is held back from the models to validate model outputs. | No | None of the above | Yes | Yes | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-546 | Azure Document Intelligence | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Azure Document Intelligence scrapes PDF files for text. The end user is responsible for verifying the quality of the final text/product. This does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Computer Vision | Enhance and automate PDF scraping for large sets of PDF files. | Improved Microsoft PDF scraping model that allows the user to provide a training set for their PDFs. | Text and file output in Microsoft applications or Azure Synapse Datalake | Text and file output in Microsoft applications or Azure Synapse Datalake | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-547 | AI-Enhanced Hub | Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Enhance staff matching and profiles with AI-driven HUB search capabilities. | Enhances collaboration and expertise matching across the lab. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-548 | OpenText for Records Management (Email Auto-Classification) | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not fall within the requirements for high impact. | ||||||||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-549 | Apple Intelligence | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 28/10/2024 | Purchased from a vendor | Apple | No | Proprietary/unknown data set used for model training. | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | |||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-550 | CURIE - Conversational Unified Research Information Engine | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Not used for organizational outcomes or work products | Generative AI | Turning unstructured questions, tasks, or ideas into structured outcomes | increasing productivity, enhance decision-making | Textual outputs | Textual outputs | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-551 | Microsoft ScreenSketch | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | Yes | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-552 | Microsoft Visual C++ Additional Runtime | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-553 | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | A Digital Twin for In-silico Spatiotemporally-resolved Experiments | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-554 | Machine learning for accelerated understanding of dynamic catalysis | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure determines the activity of reactions and how it is transiently transformed under varying operating conditions. | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure determines the activity of reactions and how it is transiently transformed under varying operating conditions. The proposed effort seeks to take on this challenge with a data-science-driven approach to computational modeling, joining it with advanced experimental methods of characterization to create new methods for capturing realistic complexity of reactions at heterogeneous and disordered interfaces. As a prototype application, we will focus on the water gas shift reaction (WGSR) CO + H2O → CO2 + H2, as carried out over an active oxide (ceria CeO2) supported nanoscale Pt cluster catalyst. The Pt/CeO2 system is a high activity, low temperature catalyst for WGSR in which transient catalyst reconstructions and fluxional oscillating behavior of active sites at the metal-support interface play an essential role. | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure determines the activity of reactions and how it is transiently transformed under varying operating conditions. The proposed effort seeks to take on this challenge with a data-science-driven approach to computational modeling, joining it with advanced experimental methods of characterization to create new methods for capturing realistic complexity of reactions at heterogeneous and disordered interfaces. As a prototype application, we will focus on the water gas shift reaction (WGSR) CO + H2O → CO2 + H2, as carried out over an active oxide (ceria CeO2) supported nanoscale Pt cluster catalyst. The Pt/CeO2 system is a high activity, low temperature catalyst for WGSR in which transient catalyst reconstructions and fluxional oscillating behavior | The understanding of catalytic reactions has been a long-standing challenge due to the complexity and wide range of time scales involved in their mechanisms. There remain significant gaps in the understanding of how the catalyst's atomistic structure determines the activity of reactions and how it is transiently transformed under varying operating conditions. The proposed effort seeks to take on this challenge with a data-science-driven approach to computational modeling, joining it with advanced experimental methods of characterization to create new methods for capturing realistic complexity of reactions at heterogeneous and disordered interfaces. As a prototype application, we will focus on the water gas shift reaction (WGSR) CO + H2O → CO2 + H2, as carried out over an active oxide (ceria CeO2) supported nanoscale Pt cluster catalyst. The Pt/CeO2 system is a high activity, low temperature catalyst for WGSR in which transient catalyst reconstructions and fluxional oscillating behavior | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-555 | Computer Vision for Defect Detection | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | The AI use case focuses on product quality control and does not significantly affect legal, material, or critical access to services rights. | Computer Vision | To automate and enhance the detection of visible defects in products, improving quality control and reducing production errors. | Improved product quality, reduced production costs, less need for manual inspections, and enhanced productivity. The AI system will lead to significant cost savings and better consistency in products over time. | The AI system outputs will include identified defects in product images, which will then be reviewed and verified by human operators. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The AI system outputs will include identified defects in product images, which will then be reviewed and verified by human operators. | |||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-556 | Microsoft Project | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-558 | HeyGen | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Explore the generation of engaging internal training videos using AI avatars. | Makes internal training more engaging and effective. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Purchased from a vendor | HeyGen | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-559 | Nuclear Safety Analysis | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Other | Safety, risk, and reliability analysis is performed by reactor designers, developers, and plant operators to ensure safe design and operation of the reactor and the plant; and is required by the regulators as part of license application. Performing s | The outcome of this effort will provide a tool to the safety analysis teams which will enable them to automate creating the risk models resulting in significant reduction in time spent on performing safety analysis, writing SARs, and conducting the regulatory review of safety case. | Failure Modes and Effects Analysis Fault tree analysis Creating risk assessment models | Developed with both contracting and in-house resources | Not available | No | Failure Modes and Effects Analysis Fault tree analysis Creating risk assessment models | Not available | No | Yes | Not available | ||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-56 | CrewAI | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | No high-impact category applies | Agentic AI | Automate and orchestrate workflows across LLMs and cloud platforms. | Boosts efficiency by automating complex workflows across platforms. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/01/2025 | Purchased from a vendor | CrewAI | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-560 | AI for QA Audit | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Improve quality assurance at different levels of the Lab. | Improved quality assurance | Recommendation to existing QA audit submissions | Recommendation to existing QA audit submissions | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-561 | Google Chrome Generative AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet the criteria for High-Impact. Unless explicitly deployed in a safety-critical or classified environment, it should be considered Not High Impact. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 06/12/2023 | Purchased from a vendor | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | |||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-562 | PWS Builder | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | Improved speed to create a performance work statement document. | DOE employees would be able to quickly and accurately draft performance work statements for projects both net-new and in-flight. This would greatly reduce the time needed to get a project up and running. | Performance work statements. | 01/10/2024 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | Performance work statements. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-563 | ServiceNow Classification Prediction | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition outlined by OMB. | Classical/Predictive Machine Learning | Inconsistencies in classification values determined by human technicians | Improved automation | Prediction | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Prediction | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-564 | Safeguards Digital Twin | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Classical/Predictive Machine Learning | This deals with international safeguards approaches. | Reduce the burden on inspectors by synthesizing data and announcing anomalies. | Flag for when off-normal operations are detected along with expected material generated. | Developed with both contracting and in-house resources | Not available | No | Flag for when off-normal operations are detected along with expected material generated. | Reactor physics data from Serpent | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-565 | Use AI/ML to Optimize Data and Experiments at National Synchrotron Light Source II (NSLS-II) and the Accelerator Test Facility (ATF) | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | This deals with international safeguards approaches. | Classical/Predictive Machine Learning | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSLS-II and ATF | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSLS-II and ATF | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSLS-II and ATF | Three projects: use ML for denoising in scientific images; use ML to mine large quantities of data for automated evaluation of data quality and predictive analysis; develop AI/ML infrastructure to tune / align / optimize instruments/beamlines at NSLS-II and ATF | ||||||||||||||||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-567 | Tabnine AI Pair Programmer | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | AI provides structured recommendations only; human reviewers retain authority for evaluation decisions | Generative AI | Accelerate and simplify software development across our entire site. There is no definite 'problem' being solved, just capitalizing on an opportunity to leverage industry investment in generative AI for software development | This use case will boost engineering velocity, code quality, and developer happiness by automating the coding workflow through AI tools customized to our teams. | Expedited quality software for Test Engineering | 16/07/2025 | Purchased from a vendor | Tabnine | Yes | Expedited quality software for Test Engineering | Tabnine is responsible for model training, containerization, and updates to their software | No | None of the above | No | ||||||||||||
| Department Of Energy | IM-60 - IM Enterprise Operations and Shared Services (IM) | DOE-568 | AI-Based Chat Bot | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Output does not serve as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. Assists in developing software code that may accept product, but this AI will not make those decisions | ||||||||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-569 | Ask Alan | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI/AI interactive learning | Productivity Tool | Text | 01/09/2025 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-57 | Elastic Stack Technology (ELK) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | No high-impact category applies | Classical/Predictive Machine Learning | Increase searchability of documents for information pertinent to the mission | Enable intelligent media cataloging to assist with data discovery | Intelligent collection content searching using ElasticSearch, Logstash, and Kibana | 01/11/2022 | Purchased from a vendor | Elastic | Yes | Intelligent collection content searching using ElasticSearch, Logstash, and Kibana | Knowledge Preservation Management (KPM) Media | No | None of the above | Yes | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-570 | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | Data-Science Enabled, Robust and Rapid MeV Ultrafast Electron Diffraction System to Characterize Materials Including for Quantum and Energy Applications | ||||||||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-571 | GitHub Co-Pilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with a legal, material, or binding effect, or any significant effect on those items listed in the definition of High Impact in OMB M-25-21. | Generative AI | quicker, more complete, and safer code development | quicker, more complete, and safer code development | text answers to input questions, code suggestions | 25/08/2025 | Purchased from a vendor | GitHub | Yes | text answers to input questions, code suggestions | Pretrained AIs with access to GitHub code repositories | Not applicable | No | Not applicable | None of the above | No | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | Direct usability testing
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-572 | Use ML as part of an integrated strategy for forecasting renewable energy resources | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | This is an interdisciplinary project aiming to develop a novel system that transforms the forecasting of renewable energy resources by seamlessly integrating a numerical weather prediction model, ML, and measurements. | ||||||||||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-573 | DNA-P Use Cases Leveraging Artificial Intelligence (Pre-Development) | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of the OMB Memorandum M-25-21. | Other | "- identify data clusters/trend analysis - identify data discrepancies/data enrichment - generate suggestions (including generating reports, data linkages, and courses of action) - generate graphical and natural language analyses" | "-save time for DOE DNA-P users - improve DOE/NNSA data quality - improve DOE/NNSA safety operations" | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities -responses to queries via RAG workflows" | "- data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities -responses to queries via RAG workflows" | ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-574 | M365 Copilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | b) Presumed high-impact, but determined not high-impact | Not high-impact | M365 Copilot and its variants are integrated into all LANL collaboration and productivity services within M365, including email and Teams. | Generative AI | Collaboration and Productivity improvements | Automation of routine tasks such as email drafting, meeting summaries, and document generation. Improved efficiency in performing mundane tasks. | Contextual responses, action results, agentic orchestration for Copilot Studio, apply templates, and a host of outputs depending on the M365 App it is being used with. | 01/07/2025 | Purchased from a vendor | Microsoft | Yes | Contextual responses, action results, agentic orchestration for Copilot Studio, apply templates, and a host of outputs depending on the M365 App it is being used with. | Pre-trained on public and licensed data but NOT retrained on GCC content. The latest training data for the LLM was from October 2023. | No | None of the above | No | Yes | M365 Copilot and its variants are collaboration, productivity, and coding tools that will provide efficiency and speed to the delivery of common work tasks. | In-Progress | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Not applicable | Other | ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-575 | EDMS Admin | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Used for records management functions. | Generative AI | Machine-readable information is difficult for humans to summarize quickly. By enabling summaries of information in our electronic document management system we have a previously-unavailable capability that addresses this problem. | Reduces administrative burden through intelligent automation of routine tasks. | Information summaries | 02/09/2025 | Developed with both contracting and in-house resources | Not available | No | Information summaries | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | IM-50 - Architecture Engineering Technology and Innovation (IM) | DOE-576 | EnerGPT Canvas | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | It does not meet the definition set by OMB. | Generative AI | Improves the speed and accuracy with which DOE employees draft, edit, and produce content. | Allows DOE users to edit their projects using AI in a single platform. | New content, edits to existing content, code, etc. | 06/08/2025 | Developed with both contracting and in-house resources | Accenture Federal Services, Google | Yes | New content, edits to existing content, code, etc. | Google's Gemini family of models. | No | None of the above | No | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-577 | Microsoft Copilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-578 | Project Optimus - Prime Contract | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used as a general/business chat agent for genAI. | Other | Not available | Improves contract lifecycle efficiency and compliance tracking. | Contract citations, abstracts and other summaries. | Developed with both contracting and in-house resources | Not available | No | Contract citations, abstracts and other summaries. | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-579 | MR-DT | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | This deals with international safeguards approaches. | Classical/Predictive Machine Learning | Aid safeguards analysts in determining if a reactor is being used in a non-declared way. | Reduce the burden on inspectors by synthesizing data and announcing anomalies. | Flag for when off-normal operations are detected along with expected material generated. | Flag for when off-normal operations are detected along with expected material generated. | ||||||||||||||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-58 | Raytheon Multimedia Monitoring System (M3S) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | No high-impact category applies | Natural Language Processing | Video transcript generation | Enable off-cloud transcription of pre-recorded video media to improve data discoverability | XML representation of the speech detected in a pre-recorded video | 18/01/2024 | Purchased from a vendor | Raytheon BBN Technologies | Yes | XML representation of the speech detected in a pre-recorded video | Knowledge Preservation Management (KPM) Media | No | None of the above | Yes | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-582 | Development of a Planning, Operation, and Control Framework for Hybrid Energy Storage and Renewable Generation Systems | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resources by leveraging BNL's expertise in energy storage technologies, probabilistic-based planning and control solutions, and machine learning techniques. | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resources by leveraging BNL's expertise in energy storage technologies, probabilistic-based planning and control solutions, and machine learning techniques. | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resources by leveraging BNL's expertise in energy storage technologies, probabilistic-based planning and control solutions, and machine learning techniques. | One project: will develop the initial framework for planning, operation, and control of these 'hybrid' energy systems containing high penetrations of renewables together with energy storage, non-wires alternatives, and conventional generating resources by leveraging BNL's expertise in energy storage technologies, probabilistic-based planning and control solutions, and machine learning techniques. | ||||||||||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-584 | GitHub Copilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Generative AI | Enhancement of developer productivity | Cost savings and efficiency | Recommendations for code | 01/07/2025 | Purchased from a vendor | Microsoft | Yes | Recommendations for code | No SLAC Data is used to train the model. Only provided prompts for output | No | None of the above | No | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-585 | Accelerated Nanomaterial Discovery | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Historically the discovery and development of new materials has followed an iterative process of synthesis, measurement, and modeling; suitable integration of advanced characterization, robotics, and machine-learning provides an opportunity for radically accelerating the material design process. | Historically the discovery and development of new materials has followed an iterative process of synthesis, measurement, and modeling; suitable integration of advanced characterization, robotics, and machine-learning provides an opportunity for radically accelerating the material design process. The CFN has an established record of discovering nanomaterials by applying new materials synthesis strategies, advanced characterization, and machine-learning. Integrating these efforts will enable autonomous platforms for iteratively exploring material parameter spaces, which have potential to revolutionize materials science by uncovering fundamental links between synthetic pathways, material structure, and functional properties. | Historically the discovery and development of new materials has followed an iterative process of synthesis, measurement, and modeling; suitable integration of advanced characterization, robotics, and machine-learning provides an opportunity for radically accelerating the material design process. The CFN has an established record of discovering nanomaterials by applying new materials synthesis strategies, advanced characterization, and machine-learning. 
Integrating these efforts will enable autonomous platforms for iteratively exploring material parameter spaces, which have potential to revolutionize materials science by uncovering fundamental links between synthetic pathways, material structure, and functional properties. | Historically the discovery and development of new materials has followed an iterative process of synthesis, measurement, and modeling; suitable integration of advanced characterization, robotics, and machine-learning provides an opportunity for radically accelerating the material design process. The CFN has an established record of discovering nanomaterials by applying new materials synthesis strategies, advanced characterization, and machine-learning. Integrating these efforts will enable autonomous platforms for iteratively exploring material parameter spaces, which have potential to revolutionize materials science by uncovering fundamental links between synthetic pathways, material structure, and functional properties. | ||||||||||||||||||||
| Department Of Energy | WAPA - Western Area Power Administration (PMA) | DOE-586 | GitHub Copilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI output does not serve as a principal basis for decisions or actions with a legal, material, binding, or significant effect on high-impact areas. | Natural Language Processing | Improve the quality and speed of code development. | Speed the delivery of code development, edits, and troubleshooting. | Provides developers with possible code recommendations in the appropriate format, identifies code errors, and suggests fixes for poor-performing code. | 31/03/2024 | Purchased from a vendor | Microsoft | Yes | Provides developers with possible code recommendations in the appropriate format, identifies code errors, and suggests fixes for poor-performing code. | No agency content used for training. | No | No | Yes | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-587 | DIRECTIVES | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on DOE/NNSA content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-588 | AI-Assisted Strategies and Solutions for Environmental Technology (AI-ASSET) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | In development stage and impact has not been determined | Agentic AI | Enable rapid technology transfer of the ALTEMIS AI approaches to new sites and new systems through an automated data analysis and knowledge management toolkit | AI-assisted monitoring system design, generalized AIML contaminant forecasting framework, development of end state recommendations that account for site-specific environmental/technological/regulatory constraints, EM knowledge management | Knowledge graphs, analysis output | Knowledge graphs, analysis output | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-589 | Google Agentspace / NotebookLM | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Unify enterprise data and enable large-scale agent deployment with Google Agentspace. | Improves enterprise knowledge use and team productivity at scale. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/07/2025 | Purchased from a vendor | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | Y-12 - Consolidated Nuclear Security Y-12 (YFO) | DOE-59 | Cognitive Prescreen Tool (CPT) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | No high-impact category applies | Classical/Predictive Machine Learning | Classification recommendations to assist the Derivative Classifier in making a document's overall classification determination. | Serve as a recommender to Derivative Classifier to assist with document review to improve process accuracy and efficiency, in that order. | Sensitive information detection bound to DOE classification guidance to help reduce IOSC and prevent information loss | 18/12/2019 | Developed in house | Yes | Sensitive information detection bound to DOE classification guidance to help reduce IOSC and prevent information loss | Classification Guides from the CNS Classification Office | No | None of the above | Yes | |||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-590 | SpyglassGPT Chat Assistant | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Generative AI | Aid in routine administrative tasks | Increased efficiency for routine tasks | Varies based on user prompts; it is a general-purpose chatbot using government instances of popular OpenAI models. | 18/08/2025 | Developed with both contracting and in-house resources | Microsoft | Yes | Varies based on user prompts; it is a general-purpose chatbot using government instances of popular OpenAI models. | OpenAI trained the models on a mix of publicly available, licensed, and open-source data, including text, code, images, and audio, with no proprietary or user data used without explicit permission. They were tested and aligned for safety and quality, using filtered and optimized subsets of data as appropriate for model size and capabilities. | No | No | https://github.com/open-webui/open-webui | ||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-591 | Custom GenAI for Advanced Reactor Development (LotusAI) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Generative AI | Engineers spending extensive time manually iterating through documents during the development of the LOTUS Test Bed, and have to frequently answer user questions. | Engineers spending extensive time manually iterating through documents during the development of the LOTUS Test Bed, and have to frequently answer user questions. | Answers to Test Bed design and engineering questions with citations back to source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development | Yes | Answers to Test Bed design and engineering questions with citations back to source documents. | LOTUS Test Bed design and schematic information | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-592 | Use AI/ML for Climate Prediction | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models; partial differential equation solving using machine learning for simulating the aerosol-cloud-precipitation system that is recognized to be the key in forecasting weather and climate change | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models; partial differential equation solving using machine learning for simulating the aerosol-cloud-precipitation system that is recognized to be the key in forecasting weather and climate change | Two projects: leverage AI/ML tools to synthesize uncertainty quantification-targeted, complex, multi-scale, multi-domain observations into high-resolution process models to characterize 4D variability in aerosol dynamics and evolution, which accounts for large uncertainties in climate models; partial differential equation solving using machine learning for simulating the aerosol-cloud-precipitation system that is recognized to be the key in forecasting weather and climate change | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-593 | Microsoft CoPilot | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Generative AI | Generative AI tools like this solve the problem of time-consuming knowledge work by instantly providing expert-level assistance with writing, coding, research, and analysis. They automate repetitive tasks that require human expertise, allowing people | Boost employee productivity by helping to create content, assist in coding, and complex problem-solving tasks, while democratizing access to sophisticated capabilities like writing and design. This technology promises to accelerate innovation by serving as an intelligent collaborator, freeing humans to focus on higher-level strategic and creative work | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | 03/07/2025 | Developed with both contracting and in-house resources | Microsoft | No | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | Microsoft 365 Copilot is powered by large language models (LLMs) developed by OpenAI (e.g., GPT-4), which are trained on a broad corpus of publicly available data, licensed datasets, and Microsoft-curated content. This includes public web content, books, articles, and licensed third-party data | Not available | Yes | No | Not available | |||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-594 | M&O Program Trimester Reporting Modernization | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Modernize and automate trimester reporting to improve clarity and consistency. | Increases transparency and efficiency in program reporting. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-595 | AI and natural-language powered search | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not fall within the requirements for high impact. | Classical/Predictive Machine Learning | Solve records retrieval issues. | Efficient retrieval of records. | Open Text will generate a search results report. | 02/01/2019 | Purchased from a vendor | OpenText | Yes | Open Text will generate a search results report. | Trained using existing internal EERE records curated by subject matter experts. | Yes | None of the above | No | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-596 | Microsoft Visual C++ Redistributable | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-597 | Offshore AIIM Dashboard | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Evaluate the integrity of offshore energy infrastructure (e.g., pipelines, platforms) in the U.S. Gulf Region. | Evaluate integrity of infrastructure. | Data | Data | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-598 | EDX-ClaiMM | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Natural Language Processing | Address fundamental knowledge gaps and foster the innovation of new techniques for enhanced characterization and recovery of critical minerals and materials (CMMs) within the US. | Address fundamental knowledge gaps. | Data | Data | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-599 | Microsoft Teams | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-60 | Machine Learning for Linac Improved Performance | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | In Linacs at FNAL and J-PARC, the current emittance optimization procedure is limited to manual adjustments of a few parameters; using a larger number is not practically feasible for a human operator. Using machine learning (ML) techniques allows li | Daily fluctuations in the Ion Source conditions as well as the effect of environmental changes to RF systems and cavities affect the Linac beam. Results include increased beam loss resulting in increased beamline component irradiation, decreased beam intensity to downstream machines affecting Accelerator Complex deliverables, and drifts in Linac beam energy directly affecting Booster losses. These drifts are not easily predictable since we do not have environmental control on the RF gallery, nor enough instrumentation in the Ion Source or Linac proper. To counter these effects, we are developing AI-based optimization and modeling, including Bayesian Optimization and surrogate model-based optimization, with the ultimate goal of (near) real-time RF compensation. | Outputs are proposed changes to RF system parameters (cavity phase settings and/or field gradients) to counter the effect of daily drift and to stabilize the output energy. | 25/09/2025 | Developed in house | No | Outputs are proposed changes to RF system parameters (cavity phase settings and/or field gradients) to counter the effect of daily drift and to stabilize the output energy. | Accelerator operations machine data as well as accelerator simulation | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-600 | Storage usage effectiveness and data placement optimization at Data Center | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | The goal of this project is to take data management for data centers to the next level by implementing Artificial Intelligence (AI) and Machine Learning (ML) to create a precise data use prediction model to aid important business and operational decisions | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-601 | xAI Grok Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and capabilities | Generative AI | Quick access to general knowledge. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | xAI | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-602 | Microsoft Search in Bing | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | WAPA - Western Area Power Administration (PMA) | DOE-603 | Microsoft Copilot (Pilot) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | AI output does not serve as a principal basis for decisions or action with legal, material, binding or significant effect on high impact areas. | Natural Language Processing | Improve general office productivity | Content creation and drafting, data analysis and summarization, personalized learning and research | Content review recommendations, content summaries, how-to instructions. | 01/08/2025 | Purchased from a vendor | Microsoft | Yes | Content review recommendations, content summaries, how-to instructions. | No agency content used for training. | No | No | In-Progress | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | In-Progress | ||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-604 | FindMATID | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | Search for corporate material IDs | Maximize use of Strategic Agreements | Material ID | 01/10/2024 | Developed in house | Yes | Material ID | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-606 | AI Enabled Code Review | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | Enhance application development lifecycle capabilities | Expedited code production | Code | Code | |||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-608 | Microsoft Teams Classic | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-609 | Custom GenAI for Advanced Reactor Development (RickoverAI) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Generative AI | Reactor engineers spending extensive time manually iterating through documents during the reactor engineering process | Reactor engineers save time by using the AI to quickly find answers and relevant source documents. | Answers to reactor design and engineering questions with citations back to source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development with INL Advanced Analytics Center of Excellence | Yes | Answers to reactor design and engineering questions with citations back to source documents. | Advanced reactor engineering schematics | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-61 | AI Denoising | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety. | ||||||||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-610 | Report Assistant | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This use case does not significantly affect legal, material, or binding rights or critical access to services. | Classical/Predictive Machine Learning | Management spends excessive hours generating and reformatting reports for various customers/audiences; this will reduce the learning curve and the effort needed to translate information into the various formats, saving considerable time and effort. | This will reduce the time and effort required to generate reports, allowing management to focus on more critical tasks. It is expected to halve the time spent on report generation in the first year. | The AI system will generate draft reports for review and finalization. | 01/10/2024 | Developed in house | The AI system will generate draft reports for review and finalization. | ||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-611 | AI-Tailored Learning Management Solutions | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case focuses on enhancing training materials and does not significantly affect legal, material, or binding rights or critical access to services. | Generative AI | To improve the quality, relevance, and delivery of training materials, ensuring they are tailored and effective for users. | Improved training materials, dynamic learning tailored to user needs, real-time feedback, comprehensive evaluation, and higher staff effectiveness and readiness. | The AI system outputs training materials with dynamic questions and feedback, detailed performance reports, and personalized learning paths. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The AI system outputs training materials with dynamic questions and feedback, detailed performance reports, and personalized learning paths. | |||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-612 | Objective-Driven Data Reduction for Scientific Workflows | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | This project aims to develop theories and algorithms for objective-driven reduction of scientific data in workflows that are composed of various models, including data-driven AI models | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-613 | LivChat | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | General business process and policy knowledge. | Generative AI | Quick access to general policy and process knowledge. | Ability to quickly find and access internal process and procedure knowledge, and business productivity | General answers to questions, summarized documentation, internal process and policy information. | 30/06/2025 | Developed in house | Yes | General answers to questions, summarized documentation, internal process and policy information. | Not involved in training | No | No | None of the above | Yes | N/A | Yes | Not applicable | Not applicable | Other | |||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-614 | Autonomous, real-time guiding of BCP film synthesis | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | Adaptive synthesis and manufacturing processes will be realized by combining AI/ML methods for autonomous on-the-fly control of the deposition and processing of self-assembling polymer films. | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-615 | Ask IT | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on IT content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-616 | ServiceNow Now Assist (AskIT) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for Help Desk Function | Other | Service Desk operations | Improve the timeliness and quality of transactional service desk requests through improved incident search, resolution, feedback as well as AI-assisted coding of workflows. | Outputs are consistent with commercial service desk products, including the search, creation and closure of service requests. | Developed with both contracting and in-house resources | Not available | Yes | Outputs are consistent with commercial service desk products, including the search, creation and closure of service requests. | This use case will utilize service desk incident, problem and knowledge management sources for training and use. | Not available | Yes | Yes | Not available | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-617 | Low Dose Radiation Biology | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Predicting the effects of low dose radiation | Predicting the effects of low dose radiation | Predicting the effects of low dose radiation | Predicting the effects of low dose radiation | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-618 | Claude Anthropic Enterprise | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Deliver secure enterprise AI with Claude's large-context models and code generation. | Offers secure, large-context AI tools for research and analysis. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/08/2025 | Purchased from a vendor | Anthropic | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-619 | KBase: An Integrated Knowledgebase for Predictive Biology and Environmental Research | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | Building knowledgebase for systems biology and enabling the predictive analysis using web interface | ||||||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-62 | Next-Generation Beam Cooling and Control with Optical Stochastic Cooling | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Reinforcement Learning | This program leverages the physics and technology of optical stochastic cooling (OSC) to explore new possibilities in beam control and sensing. The planned architecture and performance of a new OSC system at IOTA should enable turn-by-turn programma | This effort focuses on enhanced real-time control of the structure of circulating particle beams. The additional performance and capabilities provided may enable substantially greater operational flexibility and science reach at current and future DOE accelerator facilities. | The AI system will continuously infer the state of a circulating beam distribution and then use this inference in the execution of an RL-based control policy. The primary means of control is an advanced optical stochastic cooling system. | No | The AI system will continuously infer the state of a circulating beam distribution and then use this inference in the execution of an RL-based control policy. The primary means of control is an advanced optical stochastic cooling system. | Large-scale simulation data is being used to train the diagnostic and control systems. Online training with experimental data may also be leveraged once the system is operational. | No | Yes | ||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-620 | AI Video Creation | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet the criteria. | Generative AI | Enhance internal communications and streamline Learning and Development | Tool that lab staff can use to facilitate creation of generative AI enabled media | Multi-modal media output | Multi-modal media output | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-621 | ChatGPT | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Used for general-purpose generative AI within a COTS product. | Generative AI | Generative AI tools like this solve the problem of time-consuming knowledge work by instantly providing expert-level assistance with writing, coding, research, and analysis. They automate repetitive tasks that require human expertise, allowing people | Boost employee productivity by helping to create content, assist in coding, and solve complex problems, while democratizing access to sophisticated capabilities like writing and design. This technology promises to accelerate innovation by serving as an intelligent collaborator, freeing humans to focus on higher-level strategic and creative work | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | 25/06/2025 | Developed with both contracting and in-house resources | OpenAI | No | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | "ChatGPT (including GPT-4 and GPT-5) is trained on a large and diverse corpus of publicly available and licensed data. This includes: public internet text (websites, articles, forums, books); licensed datasets from publishers and providers; data created by human trainers to refine performance. 
Importantly, ChatGPT is not trained on proprietary or private company data." | Not available | No | No | Not available | |||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-622 | Enhancing the circularity: Cost effective battery de-energization, disassembly, and pre-processing (CEBDDP) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Will serve as basis for decisions on use of end-of-first life batteries | Classical/Predictive Machine Learning | Predict state of health and state of function of spent batteries | Improved ability for prediction to enable potential reuse of batteries, towards reducing costs of batteries | Prediction of battery state of health | Prediction of battery state of health | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-623 | Microsoft MSPaint | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-624 | AGN-201 DT | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | This deals with international safeguards approaches. | Classical/Predictive Machine Learning | Aid safeguards analysts in determining if a reactor is being used in a non-declared way. | Reduce the burden on inspectors by synthesizing data and flagging anomalies. | Flags when off-normal operation is detected, along with the expected material generated. | Developed with both contracting and in-house resources | Not available | No | Flags when off-normal operation is detected, along with the expected material generated. | Not available | Not available | No | No | Not available | ||||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-625 | Intrabot | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | The AI output is not a basis for decisions or actions. It is for information retrieval. | Generative AI | The AI is intended to improve efficiency by providing plant personnel with a faster way to access information on the Pantex Intranet. | The expected benefit is increased efficiency for plant personnel. | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant Intranet pages. | Developed in house | No | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant Intranet pages. | ||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-626 | AI Drafting of Operational Procedures and Training Materials | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case does not significantly affect legal, material, or binding rights or critical access to services. | Generative AI | "To automate and enhance the drafting of operational procedures and training materials, reducing time, effort, and errors." | "Reduced development time, reduced errors and rework, better context understanding, and improved standardization and formatting, cutting time spent on document creation by half." | "The AI system outputs draft operational procedures and training materials, which need to be reviewed and corrected by users." | 01/10/2024 | Developed in house | SRNS - OT In House Staff | "The AI system outputs draft operational procedures and training materials, which need to be reviewed and corrected by users." | |||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-627 | Integrated Platform for Multimodal Data Capture, Exploration and Discovery Driven by AI Tools | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. Additional | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. Additionally, we will build data analysis tools that reveal correlations in multimodal data and apply Machine Learning (ML) methods and train artificial intelligence (AI) models that efficiently extract synergistic physical information and embed such models in new workflows for rapid scientific discovery. | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. Additionally, we will build data analysis tools that reveal correlations in multimodal data and apply Machine Learning (ML) methods and train artificial intelligence (AI) models that efficiently extract synergistic physical information and embed such models in new workflows for rapid scientific discovery. | This project will enable and accelerate scientific discovery by leveraging large complex multimodal datasets generated at BES user facilities, develop shared transferable infrastructure to store, curate, analyze and disseminate the data. 
Additionally, we will build data analysis tools that reveal correlations in multimodal data and apply Machine Learning (ML) methods and train artificial intelligence (AI) models that efficiently extract synergistic physical information and embed such models in new workflows for rapid scientific discovery. | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-628 | Improve Scout Search results | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Improve enterprise search with natural language prompts and personalization. | Provides faster, more accurate access to enterprise knowledge. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-629 | Consolidated Nuclear Waste Glass Database | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | In development stage and impact has not been determined | Classical/Predictive Machine Learning | Incorporation of several physics-driven machine learning models to predict the properties of nuclear waste glass compositions – in addition, bootstrap other glass computational science models such as GlassPy and GlassNet to the database | Develop an open-source online database consisting of property information for nuclear waste glass data generated by various national laboratories over several decades | Textual output; Chemical glass composition | Textual output; Chemical glass composition | ||||||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-63 | In-storage computing for multi-messenger astronomy in neutrino experiments and cosmological surveys | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project aims to address the big-data challenges and stringent time constraints facing multi-messenger astronomy (MMA) in neutrino experiments and cosmological surveys. Instead of following the traditional computing paradigm of moving data to th | The purpose is to enhance the ability of large-scale neutrino experiments like DUNE to detect neutrinos from core-collapse supernovas (CCSNs) and to extract useful information about their source in real time to provide prompt multi-messenger alerts to other observatories. Aside from enabling prompt SN pointing that is also precise, this will cut down the rate of fake SN triggers (currently estimated at ~1/month) and therefore offer potential savings from a reduction in the hardware resources required for storing the large amounts of data associated with CCSN candidates. | The output of the AI system is a set of predictions which will be used as the basis for a drastic reduction in the amount of data to be fed to the next stage involving reconstruction and analysis. Before feeding the data to this stage, the AI system will also perform preprocessing operations such as noise removal to facilitate and speed up subsequent data processing. 
| No | The output of the AI system is a set of predictions which will be used as the basis for a drastic reduction in the amount of data to be fed to the next stage involving reconstruction and analysis. Before feeding the data to this stage, the AI system will also perform preprocessing operations such as noise removal to facilitate and speed up subsequent data processing. | Simulated data closely approximating real-world raw detector data expected from CCSNs is used to train and validate the ML models used in the data reduction and preprocessing pipeline. | No | Yes | ||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-630 | HR Job Postings | Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | a) High-impact | High-impact | Not available | Generative AI | Generic or ineffective job postings | Enhances applicant diversity and job match quality by optimizing language and structure in postings, improving recruitment outcomes. | Improved job postings | 01/12/2025 | Developed in house | Yes | Improved job postings | Not available | No | Yes | Not available | |||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-631 | Visual Studio Community | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-632 | Invoice Scanning System | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effects, or have any significant effects on those items listed in OMB M-25-21 in the definition of High Impact. | Classical/Predictive Machine Learning | Missing information in subcontract submittals | Greater accuracy of information on required forms | Analytics | 04/09/2024 | Developed in house | Yes | Analytics | Subcontractor Database | Not applicable | No | Not applicable | None of the above | Yes | Not applicable | Yes | Not applicable | Impacts were assessed by the software owner and developer | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-633 | Facilities Visual Inspection | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Computer Vision | Detect hazards and assess facility conditions through AI-enhanced visual inspections. | Improves workplace safety and hazard detection in facilities. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-634 | Microsoft OneDrive | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-635 | QuantomVision | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | Workforce upskilling | Predict workforce evolution and analyze skills mapping. | Predict workforce evolution and analyze skills mapping. | OpenAI | Predict workforce evolution and analyze skills mapping. | N/A | |||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-637 | ServiceNow Cluster Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Classical/Predictive Machine Learning | Identify patterns of tickets created to determine workflow and automated solutions | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-638 | ChatSRS | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | General text-oriented chat (e.g., summarization, generation) | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | None | No | None of the above | No | Yes | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-64 | high level synthesis for machine learning (previously hls4ml) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project develops hardware-software AI codesign tools for FPGAs and ASICs for algorithms running at the extreme edge. | hls4ml is used to implement specialized AI algorithms in embedded hardware. This is valuable across a wide range of scientific applications, enabling real-time processing capabilities. This can accelerate scientific discovery and time to science, enabling large cost savings and enhancing DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | 25/09/2025 | Developed in house | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | unknown | ||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-640 | Knowledge Capture Agent (KCA) | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Generative AI | Capturing tenured employees' experience-based knowledge. | To pass down this experience and compile it into a database | Feeding a database that can later be queried by newcomers. | Feeding a database that can later be queried by newcomers. | |||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-641 | AI Support Agent Chatbot | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Provides advisory information only; does not make binding decisions. | Generative AI | Manual helpdesk tickets consume staff time and delay issue resolution. | Reduce Tier-1 support time, improve satisfaction with 24/7 responses, free staff for complex issues. | Conversational responses, step-by-step guidance, and links to support documentation. | Conversational responses, step-by-step guidance, and links to support documentation. | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-642 | UNSPSC Codes | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | Map requisition items to UNSPSC codes | Maximize use of Strategic Agreements and facilitate SCM reporting | UNSPSC Codes | 01/10/2024 | Developed in house | Yes | UNSPSC Codes | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-643 | ServiceNow Similarity Analysis | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Classical/Predictive Machine Learning | Identify patterns of tickets created to determine workflow and automated solutions | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-644 | Accelerating HEP Science: Inference and Machine Learning at Extreme Scales | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Developing galaxy image deblending with gravitational lensing effects and scaling AI/ML algorithms | Developing galaxy image deblending with gravitational lensing effects and scaling AI/ML algorithms | Developing galaxy image deblending with gravitational lensing effects and scaling AI/ML algorithms | Developing galaxy image deblending with gravitational lensing effects and scaling AI/ML algorithms | ||||||||||||||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-645 | Georeference Figures | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, and binding | Classical/Predictive Machine Learning | AI-powered georeferencing will improve automation, speed, and efficiency. Manual georeferencing is expensive. | Georeferencing of historical paper maps. | AI outputs a georeferenced vector file, which will be evaluated by humans | 01/03/2025 | Purchased from a vendor | Tesseract | Yes | AI outputs a georeferenced vector file, which will be evaluated by humans | Output is compared with our aerial baseline. | No | None of the above | No | In-Progress | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-646 | Microsoft Discovery | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Speed up transparent, governed R&D discovery with AI agents and graph knowledge engines. | Accelerates innovation with transparent, AI-driven R&D processes. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed with both contracting and in-house resources | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-647 | CyberSearch | Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | The AI output is not a basis for decisions or actions. It is for information retrieval. | Generative AI | The AI is intended to improve the efficiency of plant personnel by giving them an easy way to access and search cybersecurity documents. | The expected benefit is increased efficiency for plant personnel. | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. | Developed in house | No | Questions posed by plant personnel are answered by leveraging the knowledge corpus and providing answers in natural language text and links to relevant documents. | ||||||||||||||||||
| Department Of Energy | SLAC - SLAC National Accelerator Laboratory (SC43 OIM) | DOE-648 | O365 Copilot | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not meet the criteria of any of the six pillars that make up High Impact AI in Memorandum M-25-21 | Generative AI | Improve day-to-day business functionality | Automation of redundant business functions to increase efficiency | Recommendations to provide insight for better decision making, scheduling, notetaking, and data analysis | Yes | Recommendations to provide insight for better decision making, scheduling, notetaking, and data analysis | |||||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-65 | Streaming intelligent detectors for sPHENIX/EIC | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project develops real-time algorithms for event filtering with tracking detectors for nuclear physics collider experiments. | AI tools are developed for embedded inference in real-time processing systems for scientific experiments such as sPHENIX and the upcoming EIC. This can accelerate scientific discovery and time to science, enabling large cost savings and enhancing DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-650 | Critical Materials | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | a) High-impact | High-impact | Generative AI | AI will make discoveries of unknown combinations of rare earth elements and ligands used in critical materials that are currently impossible to separate | Make discoveries of combinations of rare earth elements and ligands used in critical materials efficiently, faster, less costly | New datasets and benchmarks | New datasets and benchmarks | |||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-651 | Microsoft 365 | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-652 | Advanced Fuels Campaign | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Other | Not available | Accelerates fuel development cycles and improves performance predictions, reducing R&D costs and time to deployment. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | Not available | No | Yes | Not available | ||||||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-653 | Hanford Ai Liaison (HAL) 2.0 | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effect, and does not have any significant effects on those items listed in OMB M-25-21 in the definition of high-impact. | Generative AI | Connecting business data streams to AI for increased analysis and work efficiency | cost savings, increased efficiency, increased productivity, greater analytics of data | text answers to input questions | 01/09/2025 | Developed in house | Yes | text answers to input questions | Pre-trained from OpenAI with access to Hanford specific data sources (search, popfon, and ESP) | Not applicable | No | Not applicable | None of the above | Yes | Not applicable | Yes | Not applicable | Potential impacts were assessed by the Hanford AI SME | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-654 | ATLAS | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Used for technology deployment function. | Other | Make processes in Technology Deployment more effective and efficient | Enables faster data discovery and analysis across large datasets, improving research productivity and insight generation. | Information summaries, fact sheets, marketing guides, proposal drafts, and categorization guidance | No | Information summaries, fact sheets, marketing guides, proposal drafts, and categorization guidance | Currently utilizing Azure OpenAI and HPC LLMs. Intending to utilize RAG model for DOE Tech Transfer specific information. | No | No | Yes | |||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-655 | Continuous Structure Descriptors for XANES Interpretation | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | Seeking a continuous local structure motif that correlates to X-ray spectral signatures | ||||||||||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-656 | Azure AI Search | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Azure AI Search indexes data. The end user is responsible for the verification of the data and the final use. This does not meet the criteria outlined in section 5 of OMB Memorandum M-25-21. | Natural Language Processing | Enhance the capability to find structured and unstructured data within databases and datalakes. | Improve efficiency of finding a data source. Azure AI Search also creates an internal knowledge base for use in a LLM. | AI Search creates a vectorized database. | AI Search creates a vectorized database. | ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-657 | ServiceNow Now Assist | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Agentic AI | Provide Agentic AI capabilities for use in multiple use cases for the IT divisions at LANL | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers on whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | ANL - Argonne National Laboratory (SC43 OIM) | DOE-658 | pComply-AI-High Risk | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Impact of system failure or erroneous results is negligible. | Classical/Predictive Machine Learning | Preemptive procurement compliance: daily scans of recent PARIS requisition records. | Developed as a rapid response for a corrective action to eliminate incidents when high-risk materials are procured inadvertently without the required review processes and handling. | Solution features an interactive report dashboard and automatic email notifications to persons involved with the requisition when a high-risk scenario is identified. | 01/01/2024 | Developed in house | Yes | Solution features an interactive report dashboard and automatic email notifications to persons involved with the requisition when a high-risk scenario is identified. | ANL Operational data. | No | No | Yes | Not applicable | Yes | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Direct usability testing | |||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-659 | Groundwater Modeling | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, and binding effect. | Classical/Predictive Machine Learning | Solves critical challenges in water modeling by providing forward-looking monitoring. | Forecast groundwater behavior | The AI output is a groundwater model that will be evaluated by humans. | 02/01/2003 | Purchased from a vendor | PEST | Yes | The AI output is a groundwater model that will be evaluated by humans. | Data is held back from the models to validate model outputs. | No | None of the above | No | Yes | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-66 | In-pixel AI for future tracking detectors | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project explores novel AI-on-chip technology for intelligent detectors embedded with sensing technology | AI algorithms are implemented in on-detector electronics in order to reduce data size and enable processing at high rates. | A recommendation of whether to save data based on AI classifier. Or, a fast inference of track parameters to be used for fast selection | 25/09/2025 | Developed in house | No | A recommendation of whether to save data based on AI classifier. Or, a fast inference of track parameters to be used for fast selection | Accelerator operations machine data | No | Yes | unknown | |||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-660 | Instrument Documentation Search | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Provide quick answers about instrument operations by searching ingested manuals. | Speeds troubleshooting and learning of lab instruments. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-661 | MOOSE-LLM | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Focusing on the application of generative AI on modeling and simulation tasks | Generative AI | Tools like MOOSE for multiphysics modeling have a steep learning curve. They demand considerable domain-specific knowledge, which makes it hard for newcomers to get started. | Improves user experience and reduces training time by enabling natural language assistance within the MOOSE simulation framework. | Improved documentation, input file completion, convergence analysis | 26/08/2025 | Developed in house | Open source | No | Improved documentation, input file completion, convergence analysis | This use case uses open-source code documentation, open-source large language models, and retrieval-augmented generation to build a MOOSE modeling and simulation AI assistant | Yes | No | Yes | Under SDR process; currently on INL GitLab https://hpcgitlab.hpc.inl.gov/idaholab/moosenger | MOOSENger saves time, reduces errors while building MOOSE multiphysics models, and streamlines the modeling and simulation workflow | ||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-662 | DevSec Ops AI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Used for general purpose IT M&O. | Other | Not available | accelerates secure software delivery | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-663 | Machine Learning for Autonomous Control of Scientific User Facilities | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | BNL will work alongside SLAC to implement ML algorithm(s) into NSLS-II Operations to interpret accelerator data more intelligently. We intend to train said algorithms with 5+ years of archived device-data from accelerator components, records of previous fault causes (to connect to data-symptoms) and stored beam current. | ||||||||||||||||||||
| Department Of Energy | Pantex - PanTeXas Deterrence Pantex (PFO) | DOE-664 | Preventive Maintenance Procedure Development | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Generative AI | The AI is intended to solve the inefficiency and potential for error in manually developing and updating preventive maintenance procedures. It addresses the challenge of synthesizing information from multiple, diverse data sources to ensure compliance. | The expected benefits include increased productivity, reduced manual research time, minimized errors, improved compliance, enhanced equipment uptime, and cost savings. | The system's outputs are comprehensive, compliant, and detailed preventive maintenance procedures for facilities and equipment. | Developed in house | The system's outputs are comprehensive, compliant, and detailed preventive maintenance procedures for facilities and equipment. | ||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-665 | OpenText for Records Management (File share auto-classification) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not fall within the requirements for high impact. | Classical/Predictive Machine Learning | Records retention and disposition. | Reduce time required to classify legacy records accumulated over 20 years. | OpenText will categorize each file located on the network drives. | 24/10/2022 | Purchased from a vendor | OpenText | Yes | OpenText will categorize each file located on the network drives. | Trained using existing internal EERE records curated by subject matter experts. | Yes | None of the above | No | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-666 | Towards Edge Computing: A Software and Hardware Co-Design Methodology for Application-Specific Integrated Circuit (ASIC)-based Scientific Neuromorphic Computing (NC) | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | One project: Current deep neural network (DNN)-based Artificial Intelligence (AI) algorithms have already been successfully applied to particle physics applications. This project will develop a co-design approach for methodologies and their implementation of edge computing for optimal handling of data streams | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-667 | Microsoft Visual C++ Minimum Runtime | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-668 | AI/ML for Applications in High Energy and Nuclear Physics | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | Develop state-of-the-art cycle-consistent GANs to bridge the gap between simulations and experimental data; develop real-time particle tracking with deep learning on field programmable gate arrays; explore the challenges of deploying ML modeling onto real-time inference hardware - for High Energy or Nuclear Physics | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-669 | Microsoft OneDrive MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-67 | SONIC: AI acceleration as a service | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project focuses on integration of AI hardware for at-scale inference acceleration for particle physics experiments. | SONIC is used to accelerate AI workloads on coprocessors in scientific experiments. This can accelerate scientific discovery and time to science, thus enabling large cost savings and DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | 25/09/2025 | Developed in house | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-670 | RAPIDS3: A SciDAC Institute for Computer Science, Data, and Artificial Intelligence | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | A SciDAC computer science institute; BNL co-leads the AI team | A SciDAC computer science institute; BNL co-leads the AI team | A SciDAC computer science institute; BNL co-leads the AI team | A SciDAC computer science institute; BNL co-leads the AI team | ||||||||||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-671 | LISA Chatbot Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Generative AI | Quick access to specific mission data sets and documentation | Ability to quickly find and access relevant mission data and experiment documentation | Specific, data-driven answers to mission science questions | 30/06/2025 | Developed with both contracting and in-house resources | AWS | No | Specific, data-driven answers to mission science questions | Mission science data and documentation | No | No | None of the above | Yes | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-672 | Use AI/ML to Enhance the Bioimaging Capabilities at Brookhaven National Laboratory (BNL) | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | Three projects: use expertise in ML as one element in building an integrated multiscale bioimaging capability at BNL; use AI/ML tools to accelerate the analysis of protein structure and function; develop a high-resolution AI-led analysis of nondisruptive, time-resolved light microscopy images measured at lower spatial resolutions | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-673 | Climate Weather Data | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Targeted for weather and climate related use cases | Other | Not available | speeds up climate model analysis | Not available | Developed with both contracting and in-house resources | Not available | No | Not available | Not available | No | Yes | Not available | |||||||||||||
| Department Of Energy | LM HQ - Office of Legacy Management (LM) | DOE-674 | Scripting | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Energy & the Environment | Deployed | c) Not high-impact | Not high-impact | This AI use case does not serve as a principal basis for decisions or actions with legal, material, and binding effect. | Natural Language Processing | AI-powered scripting will improve automation, speed, and efficiency. | Reduced cost in alteration of groundwater models, improving groundwater outcomes. | Python code for use in models | 30/01/2025 | Purchased from a vendor | Google Gemini | No | Python code for use in models | Data is held back from the models to validate model outputs. | No | None of the above | Yes | In-Progress | Improved management of the remedy | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Yes | Not applicable | Direct usability testing | ||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-676 | Microsoft Copilot Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and business productivity capabilities | Generative AI | Quick access to self-generated (Documents, Emails, Files) knowledge and business productivity. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | Microsoft | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-677 | EES&T Document Processing | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used for general purpose use cases within EES&T organization | Other | Not available | Cuts manual processing time and improves data accessibility by automating document classification and extraction, increasing operational efficiency. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-678 | Smart CO2 Transport-Route Planning Tool | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Identify potential routes or evaluate existing corridors for carbon transport based on current legislation, best construction practices, and more. | Inform planning and development | Data | Data | ||||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-679 | PermitAI | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Agentic AI | Streamline permitting processes with AI-powered environmental review tools. | Speeds up Federal permitting while improving transparency. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-68 | High-Velocity AI: Generative Models | Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | ||||||||||||||||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-681 | AI for Vendor Compliance | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet criteria. | Generative AI | Making sure vendors are in compliance with regulations. | More streamlined risk mitigation process with vendors. | Improved consistency in vendor selection and retention | Improved consistency in vendor selection and retention | ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-682 | ServiceNow LANL AI Portal Integration | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Generative AI | Provide GenAI capabilities for use in multiple use cases for the IT divisions at LANL | Improved automation | Recommendation | 01/11/2024 | Developed in house | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers on whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-683 | Microsoft AI Builder | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Agentic AI | Microsoft AI Builder solves the problem that most businesses want to use AI but lack the technical expertise and resources to build it themselves. It provides easy-to-use, pre-built AI tools that business users can implement without needing data scientists. | Microsoft AI Builder democratizes AI development by enabling business users to easily create custom AI models and automation solutions without extensive coding. It integrates with the Microsoft Power Platform to add AI capabilities like document processing and predictions directly into workflows, accelerating digital transformation through accessible, low-code AI solutions that enhance business processes and decision-making. | Microsoft AI Builder outputs structured business data and automated actions, including extracted information from documents, data predictions, and workflow automations that integrate directly into existing business processes and applications. | 20/06/2025 | Developed with both contracting and in-house resources | Microsoft | No | Microsoft AI Builder outputs structured business data and automated actions, including extracted information from documents, data predictions, and workflow automations that integrate directly into existing business processes and applications. | AI Builder models are trained on data that INL provides. This includes: Custom tables created by users; Imported datasets for prediction, form processing, object detection, and classification tasks; Data from Power Apps, Power Automate, and other Power Platform components. Training data remains within our Microsoft environment and tenant. | Not available | Yes | No | Not available | |||||||||||
| Department Of Energy | RL - Hanford - Richland Operations Office - Hanford (EM) | DOE-684 | LexisNexis | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | This AI system does not provide a principal basis for decisions or actions with legal, material, or binding effect, nor does it have any significant effects on the items listed in OMB M-25-21's definition of high-impact. | Generative AI | Searching legal database for case law | Better case law material for legal team | Search results to input inquiries | 01/08/2025 | Purchased from a vendor | Nexus | Yes | Search results to input inquiries | Pretrained AI with access to legal database | Not applicable | No | Not applicable | None of the above | No | Not applicable | In-Progress | Not applicable | Potential impacts are still being assessed | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | In-Progress |
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-685 | Microsoft 365 Apps for Enterprise | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-686 | Automated sorting of high repetition rate coherent diffraction data from XFELs | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Coherent X-rays are routinely provided today by the latest Synchrotron and X-ray Free-electron Laser sources. When these diffract from a crystal containing defects, interference leads to the formation of a modulated diffraction pattern called 'speckle'. When the defects move around, they can be quantified by a correlation analysis technique called X-ray Photon Correlation Spectroscopy. But the speckles also change when the beam moves on the sample. By scanning the beam in a controlled way, the overlap between the adjacent regions gives redundancy to the data, which allows a solution of the inherent phase problem. This is the basis of the coherent X-ray ptychography method, which can achieve image resolutions of 10 nm, but only if the probe positions are known. The goal of this proposal will be to separate 'genuine' fluctuations of a material sample from the inherent beam fluctuations at the high data rates of XFELs. Algorithms will be developed to calculate the correlations between all the coherent diffraction patterns arriving in a time series, then used to separate the two sources of fluctuation using the criterion that the 'natural' thermal fluctuations do not repeat, while beam ones do. We separate the data stream into image and beam 'modes' automatically. | Coherent X-rays are routinely provided today by the latest Synchrotron and X-ray Free-electron Laser sources. When these diffract from a crystal containing defects, interference leads to the formation of a modulated diffraction pattern called 'speckle'. When the defects move around, they can be quantified by a correlation analysis technique called X-ray Photon Correlation Spectroscopy. But the speckles also change when the beam moves on the sample. By scanning the beam in a controlled way, the overlap between the adjacent regions gives redundancy to the data, which allows a solution of the inherent phase problem. This is the basis of the coherent X-ray ptychography method, which can achieve image resolutions of 10 nm, but only if the probe positions are known. The goal of this proposal will be to separate 'genuine' fluctuations of a material sample from the inherent beam fluctuations at the high data rates of XFELs. Algorithms will be developed to calculate the correlations between all the coherent diffraction patterns arriving in a time series, then used to separate the two sources of fluctuation using the criterion that the 'natural' thermal fluctuations do not repeat, while beam ones do. We separate the data stream into image and beam 'modes' automatically. | |||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-689 | Software Implementation Assistant | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case primarily assists in planning and document-related tasks for software implementations and does not significantly affect legal, material, or binding rights or critical access to services. | Agentic AI | This AI solution is designed to streamline the process of software implementation by automating crucial planning tasks, technical documentation, and risk assessments, which typically demand significant time and effort. The solution aims to support junior resources. | The AI solution is expected to enhance the agency's mission by reducing the time, effort, and complexity involved in software implementation efforts, leading to significant cost savings. It will improve project planning, standardize technical documentation, aid junior resources, and allow subject matter experts to focus on critical tasks, ultimately improving project outcomes and resource utilization for the public's benefit. | The AI system outputs detailed project plans, organized tasks, technical documents, use cases, test scripts, and risk assessments specific to the software implementation project and application. | 01/10/2024 | Developed in house | Purchased from a Vendor - Tabnine | The AI system outputs detailed project plans, organized tasks, technical documents, use cases, test scripts, and risk assessments specific to the software implementation project and application. | |||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-69 | Uncertainty Quantification and Instrument Automation to enable next generation cosmological discoveries | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project will develop AI-based tools to enable critical sectors for near-future cosmic applications. Uncertainty quantification is essential for performing discovery science now, and simulation-based inference offers a new approach. The automated | Create new methods for uncertainty quantification in AI | AI algorithms | No | AI algorithms | my own simulated data; research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-690 | Copilot Studio | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Generative AI | Copilot Studio will help NNL build custom AI agents that automate repetitive tasks, answer common questions, and streamline workflows across departments, all without requiring deep coding expertise. | Copilot Studio delivers significant benefits by enabling NNL to build custom AI agents that automate routine tasks, enhance decision-making, and improve operational efficiency, all through a low-code interface. | Depending on how the agent is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | Depending on how the agent is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-691 | Microsoft Outlook MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-692 | HALO AI | Pre-deployment – The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Demonstration of AI agents for dense data interfaces at a specific installation. The impact is limited to a small number of operational functions but could expand if successful. | Other | Not available | Increases operational awareness and response speed by automating data analysis, leading to faster, more informed decision-making. | Information summaries | No | Information summaries | No | No | Yes | ||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-693 | COREII | Pilot – The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | c) Not high-impact | Not high-impact | This AI system leverages Retrieval Augmented Generation (RAG)-enabled Generative AI over large corpora of OT/ICS cybersecurity data and information to support decision making within critical infrastructure operations. | Generative AI | Support decision making for critical infrastructure OT cybersecurity operations. Provides a secure environment where sensitive data can be ingested and retrieved reliably. Reduces the time it takes to perform threat analysis, supply chain analysis, kn | Making information and knowledge easier to retrieve and digest. Significantly improves analysis time. Maximizes the transfer of knowledge to users of various backgrounds. Translates complex and complicated research results into accessible practical solutions. | Distillation of research results and findings into practical decision making information. | 01/04/2025 | Developed with both contracting and in-house resources | 1899-12-31 11:59:00 | No | Distillation of research results and findings into practical decision making information. | The Large Language Model is a pretrained open source model, but the Retrieval Augmented Generation (RAG) engine retrieves from the entire OSTI.gov corpus, Known Exploited Vulnerability (KEV) dataset, CyOTE Precursor Analysis Reports, CyOTE Observable Dataset, entire Energy Information Administration EIA.gov dataset, all CISA OT Advisories, and ARC Web Market Analysis Studies (Proprietary). This is a sample of the data that can be added to by future users. | Yes | No | Yes | Currently under SDR process and public release in progress. No URL available yet. | |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-694 | GenAI for Classified Subject Area Categorization | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Pilot to help derivative classifiers, classification analysts, researchers, and others communicating content externally to quickly determine CSA. | Generative AI | Help users, DCs and classification analysts more quickly determine Classified Subject Areas. | Help users, DCs and classification analysts more quickly determine Classified Subject Areas. | List of recommendations of classified subject areas | 19/09/2025 | Developed with both contracting and in-house resources | Open source development with INL Advanced Analytics Center of Excellence | Yes | List of recommendations of classified subject areas | Classified Subject Area documentation, validated documents | Not available | No | Yes | Not available | |||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-695 | Sindri | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Agentic AI | Automate code assignment at requisition generation time | Maximize use of Strategic Agreements and facilitate SCM reporting | Decision | Developed in house | Yes | Decision | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-696 | AI Safety Tool | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Generative AI | Mitigate workplace incidents | To reduce workplace incidents and increase employee safety. | Text based warning message | Text based warning message |||||||||||||||||||||||
| Department Of Energy | KCNSC - Kansas City National Security Campus (KCFO) | DOE-697 | Microsoft Co-Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Output does not serve as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. Just productivity enhancement. | Generative AI | Overloaded workers often must perform routine, tedious administrative tasks that waste their valuable time | Eliminate tedious, time-consuming administrative tasks; reduce time to accomplish routine tasks | Drafted, summarized, or prioritized emails; meetings scheduled at optimal times; generated follow-up emails or meeting notes; reports, presentations, or summaries generated from raw data; content rewritten or refined for clarity and tone; key insights extracted from long documents; trends derived from spreadsheet analysis; charts and visualizations created from raw data; repetitive Excel tasks automated; presentation slides generated from bullet points or outlines; presentations with improved visual appeal; speaker notes and talking points generated | 01/10/2027 | Purchased from a vendor | Microsoft | Yes | Drafted, summarized, or prioritized emails; meetings scheduled at optimal times; generated follow-up emails or meeting notes; reports, presentations, or summaries generated from raw data; content rewritten or refined for clarity and tone; key insights extracted from long documents; trends derived from spreadsheet analysis; charts and visualizations created from raw data; repetitive Excel tasks automated; presentation slides generated from bullet points or outlines; presentations with improved visual appeal; speaker notes and talking points generated | Model trained by Microsoft that was publicly available | No | None of the above | No | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-698 | Microsoft Edge | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-699 | Anthropic Claude | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Generative AI | Generative AI tools like this solve the problem of time-consuming knowledge work by instantly providing expert-level assistance with writing, coding, research, and analysis. They automate repetitive tasks that require human expertise, allowing people to focus on higher-level work. | Boost employee productivity by helping to create content, assist in coding, and support complex problem-solving tasks, while democratizing access to sophisticated capabilities like writing and design. This technology promises to accelerate innovation by serving as an intelligent collaborator, freeing humans to focus on higher-level strategic and creative work. | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | 03/07/2025 | Developed with both contracting and in-house resources | Anthropic | No | The outputs of this tool are human-like text, code, and creative content that appears to be written by an expert. These tools produce coherent paragraphs, functional programming code, detailed analyses, creative stories, technical documentation, and conversational responses that are contextually relevant and professionally structured. | At this time, INL plans to use non-CUI data with this solution. 
Claude's training data consists of a diverse mix of text from books, articles, websites, and other publicly available written content up to its knowledge cutoff date (January 2025) | Not available | No | No | Not available | |||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-70 | READS: Real-time Edge AI for Distributed Systems | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project will develop and deploy low-latency controls and prediction algorithms at the Fermilab accelerator complex | READS has two sub-projects. The first project created the means to stream live Main Injector and Recycler accelerator beam loss monitor data. This data is then fed to an AI model deployed on an FPGA so that it can infer, in real time, the origin of beam loss, either Main Injector or Recycler, for each beam loss monitor in the tunnel enclosure. The second project aimed to improve upon traditional resonant beam extraction regulation techniques using AI for use in the Fermilab Delivery Ring and Mu2e. | The ML outputs of the system are inferences as to the origin of beam loss in the Main Injector accelerator enclosure and also suggested regulation ramps to best improve the Spill Duty Factor in the Delivery Ring for Mu2e. | 25/09/2025 | Developed in house | No | The ML outputs of the system are inferences as to the origin of beam loss in the Main Injector accelerator enclosure and also suggested regulation ramps to best improve the Spill Duty Factor in the Delivery Ring for Mu2e. | research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-700 | Amazon Q Chatbot Pilot | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Not being used to make critical runtime decisions. | Generative AI | Quick access to AWS cloud engineering reference architectures and general AWS services information | Ability to quickly find and access AWS architecture and services information | Specific AWS architecture and services use-case responses to questions | 30/06/2025 | Developed with both contracting and in-house resources | AWS | No | Specific AWS architecture and services use-case responses to questions | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-701 | Agentic AI for cybersecurity change request reviews | Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Agentic AI | Use AI Agents for cybersecurity review of documents and structured data to improve processes and to help identify areas of risk. | AI agents automate routine tasks, enhance decision-making, and improve operational efficiency. In addition, the AI Agent for cybersecurity will allow for a customized approach to tie specific requirement documents with structured data from a database. | Depending on how the final product is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | Depending on how the final product is designed, outputs can represent recommendations (e.g., suggesting an action), captured user inputs (like dates, names, or preferences), or contextual responses generated using AI. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-702 | HPC OpenAI-Compatible API | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Provides an LLM API to the entire laboratory free of charge | Other | Not available | Enhances scientific productivity by providing OpenAI-compatible APIs to the laboratory, allowing for rapid development of software and small code modifications for larger model usage. | Foundational endpoint for utilization by other applications | 08/09/2023 | Developed with both contracting and in-house resources | Open Source | Yes | Foundational endpoint for utilization by other applications | No | Yes | Yes | Not available | ||||||||||||
| Department Of Energy | LLNL - Lawrence Livermore National Laboratory (LFO) | DOE-703 | Anthropic Claude Pilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | General chatbot knowledge and capabilities | Generative AI | Quick access to general knowledge. | Ability to quickly find and access general knowledge and business productivity | General answers to questions, summarized documentation, document creation, general research. | 30/06/2025 | Purchased from a vendor | Anthropic | No | General answers to questions, summarized documentation, document creation, general research. | Not involved in training | No | No | None of the above | No | N/A | In-Progress | Not applicable | Not applicable | Other | ||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-704 | Ask HR | Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on HR content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-706 | AI-Enabled Tech Desk Agent | Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Agentic AI | Allow staff to resolve tech issues through an AI-enabled self-service chat agent. | Improves staff support with faster, AI-driven issue resolution. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-707 | mass3 | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI on STEM-optimized LLMs | Productivity Tool | Text | 01/09/2025 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | NR HQ - Naval Reactors (NR) | DOE-708 | Microsoft 365 Copilot | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Output of generative AI is administratively prohibited from being used as principal responses in the various scenarios considered high impact. | Generative AI | Modern enterprise search among Microsoft 365 content, AI integration into daily business applications, improved management of email and messaging. | Increased efficiency for routine tasks using Microsoft 365 applications | Text and file output in Microsoft 365 applications. Varies based on user prompts. | Text and file output in Microsoft 365 applications. Varies based on user prompts. | ||||||||||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-709 | AI-Form and Questionnaire Assistant | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | This AI use case does not significantly affect legal, material, or binding rights or critical access to services. It is focused on enhancing the completion of standard forms and questionnaires, and does not involve decisions with legal, material, binding, or significant effects. | Generative AI | To automate and enhance the completion of standard forms and questionnaires used onsite at SRNS. This AI aims to reduce time, effort, and errors, and enhance context understanding, consistent formatting, and independent outputs from the system. | Reduced development time, reduced errors and rework, better context understanding, and improved standardization and formatting, reducing time spent on forms by half. | The AI system outputs draft forms and questionnaires which need to be reviewed and corrected by users. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The AI system outputs draft forms and questionnaires which need to be reviewed and corrected by users. | |||||||||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-71 | Simulation-based inference for cosmology | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project will develop and use simulation-based inference to estimate cosmological parameters related to cosmic acceleration in the early and late universe — via the cosmic microwave background and strong gravitational lensing, respectively. This | DOE ECA award. Apply SBI to strong lensing and CMB to infer cosmological parameters | prediction of numerical values of cosmology | No | prediction of numerical values of cosmology | research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-710 | SQL Server Management Studio | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-711 | Microsoft Exchange Server | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-712 | Collaborative Machine learning platform for Scientific Discovery | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. The result will be a shared platform that lowers the barrier to entry by leveraging the advances in machine learning methods across user facilities, thus empowering domain scientists and data scientists to discover new science using existing and new data with new tools | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. The result will be a shared platform that lowers the barrier to entry by leveraging the advances in machine learning methods across user facilities, thus empowering domain scientists and data scientists to discover new science using existing and new data with new tools | New advances in scientific applied machine learning (ML) offer an opportunity to leverage the commonalities, scientific insights and collected experience of the larger scientific user facility community across different experiments and facilities. The result will be a shared platform that lowers the barrier to entry by leveraging the advances in machine learning methods across user facilities, thus empowering domain scientists and data scientists to discover new science using existing and new data with new tools | ||||||||||||||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-713 | Performance Monitoring at the Salt Waste Processing Facility (SWPF) | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | b) Presumed high-impact, but determined not high-impact | Not high-impact | In development stage and impact has not been determined | Classical/Predictive Machine Learning | With data from SWPF instrumentation, train neural networks to consider process parameters and chemical speciation data from incoming salt batches to predict filtration rate performance. | Proactive monitoring of complex facility processes with only select instrumentation data to guide prediction process operations | Textual output; processing parameters | Textual output; processing parameters | ||||||||||||||||||||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-714 | Advanced Long Term Environmental Monitoring Systems (ALTEMIS) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Energy & the Environment | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning | Reduce the cost of long-term monitoring using integrated sensing technologies and AI/ML to forecast groundwater plume migration and anomalies. | Proactive, rather than reactive, monitoring of complex geochemical systems. | Spatiotemporal optimization of sensor locations, correlate proxy variables (e.g., pH, specific conductance, water table elevation, etc.) with contaminants, measure proxy variables with various sensing modalities, predict concentrations across space and time given proxy variables. | 01/09/2022 | Developed in house | Python | Yes | Spatiotemporal optimization of sensor locations, correlate proxy variables (e.g., pH, specific conductance, water table elevation, etc.) with contaminants, measure proxy variables with various sensing modalities, predict concentrations across space and time given proxy variables. | Sensor systems: In Situ (vendor name) well sensors, electrical resistivity tomography system, custom vertically resolved temperature sensors | altemisai.org | No | Yes | |||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-715 | Custom GenAI for ATR Fuel Conversion Project (FuelGPT) | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | AI Pilot specific to one project related to HEU to LEU fuel conversion. | Generative AI | Engineers spending extensive time manually iterating through documents during the fuel conversion project. | Engineers save time by using the AI to quickly find answers and relevant source documents. | Engineers save time by using the AI to quickly find answers and relevant source documents. | 04/08/2025 | Developed with both contracting and in-house resources | Open source development | Yes | Engineers save time by using the AI to quickly find answers and relevant source documents. | HEU to LEU fuel conversion project documents | Not available | No | Yes | Not available | |||||||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-716 | Denoising Diffusion to Accelerate Detector Simulation | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This program aims to develop generative models for quickly simulating showers of particles in calorimeters for LHC experiments | This effort is exploring generative AI to replace costly detector simulation. This would enable faster, more accurate simulation, accelerating and enhancing scientific results and allowing easier use of GPU coprocessors at HPC centers. | The AI system outputs simulated detector hits (energy deposits) in one or more subdetectors of the particle physics experiment. | 25/09/2025 | Developed in house | No | The AI system outputs simulated detector hits (energy deposits) in one or more subdetectors of the particle physics experiment. | research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-717 | Tackling Solid-State Electrochemical Interfaces from Structure to Function Utilizing HPC and Machine Learning Tools | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Applying AI/ML methods to discover better solid state battery interface material using HPC | Applying AI/ML methods to discover better solid state battery interface material using HPC | Applying AI/ML methods to discover better solid state battery interface material using HPC | Applying AI/ML methods to discover better solid state battery interface material using HPC | ||||||||||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-718 | 5G-enabled Reliable and Decentralized IoT Framework with Blockchain | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | We propose to develop an end-to-end, 5G-enabled, reliable, and decentralized IoT framework that improves data collection and communication among edge computing devices for science applications. | ||||||||||||||||||||
| Department Of Energy | LANL - Los Alamos National Lab (LAFO) | DOE-719 | ServiceNow AI Search | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | ServiceNow's AI output is NOT a principal driver of a government decision that would meaningfully affect people's rights, safety, critical access, or strategic assets. AI in ServiceNow is strictly used to help manage IT resources for LANL. | Natural Language Processing | Provide better search experience for finding knowledge articles, tickets, and other data. Ability to pull from sources external to ServiceNow | Improved automation | Recommendation | 01/11/2024 | Purchased from a vendor | ServiceNow | Yes | Recommendation | Existing LANL IT ticket data stored in GCC Data Center certified as FedRAMP High. Training servers in same environment. | No | None of the above | No | Yes | Minimal impact as AI is used purely for predictive purposes to aid trained technicians. Humans remain the decision makers whether to use the AI's output. | Yes – by another appropriate agency office or reviewer not directly involved in the AI's development | Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | Yes | Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-72 | Extreme data reduction for the edge | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | This project develops AI algorithms and tools for near-sensor data reduction in custom hardware. | AI tools are developed for embedded inference in real-time processing systems for scientific experiments. This can accelerate scientific discovery and time to science, thus enabling large cost savings and enhancing DOE scientific prestige. | It can be an AI algorithm from prediction to data compression to control (decision making). | No | It can be an AI algorithm from prediction to data compression to control (decision making). | research datasets from scientific experiments | No | Yes | ||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-720 | Hub Biography Builder | Pilot – The use case has been deployed in a limited test or pilot capacity. | Human Resources | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Automatically generate draft professional bios from resumes, CVs, and other inputs. | Saves time and improves quality of professional staff bios. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/07/2025 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-721 | Microsoft Skype App | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | 01/02/2023 | Purchased from a vendor | Microsoft | No | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-722 | Cloud Knowledge Hub | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Answer staff questions on cloud services and best practices via an AI knowledge hub. | Expands staff expertise and accelerates cloud adoption. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | ||||||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-723 | Crickets | Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Classical/Predictive Machine Learning | Facilitate analysis of user intent upon detection of access to potentially inappropriate web content | Reduce analysis cycle and response times | Prediction | 01/10/2024 | Developed in house | Yes | Prediction | None | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-724 | Neptune | Pre-deployment – The use case is in a development or acquisition status. | International Affairs | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Other | Content analysis across domains and structured/unstructured content | Productivity Tool | Text | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | |||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-725 | AccessAI | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | End User Productivity Tool | Generative AI | GenAI-RAG on corporate content | Productivity Tool | Text | 01/10/2024 | Developed in house | Yes | Text | No | None of the above | No | In-Progress | No impacts | Yes, sufficient and periodic training has been established | Not applicable | Not applicable | Direct usability testing | ||||||||
| Department Of Energy | SRS - ESB - Savannah River Site - Enterprise System Boundary (SRFO) | DOE-726 | Machine Learning to support Operational Efficiency | Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | The AI use case focuses on capturing operational metrics and institutional knowledge and does not significantly affect legal, material, or critical access to services rights. | Classical/Predictive Machine Learning | Reducing risk of losing institutional knowledge associated with an aging workforce. Ensure time to proficiency for junior operators is minimized through storing and accessing institutional knowledge to understand how operational efficiency can be improved. | Less unanticipated downtime, improved decision making and improved manufacturing processes with optimized output. | The system will evaluate process output and seek to provide recommendations to realize the desired output, along with the associated logic for why the change will yield the projected output. | 01/10/2024 | Developed in house | SRNS - OT In House Staff | The system will evaluate process output and seek to provide recommendations to realize the desired output, along with the associated logic for why the change will yield the projected output. | |||||||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-727 | DNA-P Use Cases Leveraging Artificial Intelligence (Deployed) | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of the OMB Memorandum M-25-21. | Other | - identify data clusters/trend analysis - identify data discrepancies/data enrichment - generate suggestions (including generating reports, data linkages, and courses of action) - generate graphical and natural language analyses | - save time for DOE DNA-P users - improve DOE/NNSA data quality - improve DOE/NNSA safety operations | - data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities - responses to queries via RAG workflows | 01/04/2024 | Developed in house | Palantir | Yes | - data clusters - summaries - recommendations (data linkages, data entries, and courses of action) - graphical and natural language analyses - extracted entities - responses to queries via RAG workflows | - No custom models developed - AI use cases have been deployed on publicly available information as well as agency-provided data | No | PIA not publicly available | None of the above | No | PIA not publicly available | ||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-728 | Create Statement of Work (SOW) for Procurement | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Ensure procurement SOWs meet standards through guided AI review and alignment checks. | Ensures compliance and quality in procurement processes. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/05/2025 | Developed in house | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | Yes | ||||||||||||
| Department Of Energy | PPPL - Princeton Plasma Physics Laboratory (SC43 OIM) | DOE-729 | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | Natural Language Processing | Faster user support and an additional chain of communication for users to report issues. More organized and consistent information extraction from a message. | Faster user support and an additional chain of communication for users to report issues. More organized and consistent information extraction from a message. | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | 01/09/2025 | Developed in house | No | AI workflow to process voicemail user service requests and translate them into actionable IT service tickets | No specific training data; it uses generally available LLM(s) | No | No | Google Gemini | In-Progress | Reduction of admin burden. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | ||||||
| Department Of Energy | FNAL - Fermi National Accelerator (SC43 OIM) | DOE-73 | Machine Learning for Accelerator Operations Using Big Data Analytics / L-CAPE | Pilot – The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | The use case does not have an effect on civil rights/liberties/privacy, access to education/housing/insurance/credit/employment, access to critical government resources/services, human health/safety, critical infrastructure/public safety | Classical/Predictive Machine Learning | Big data analytics for anomaly prediction and classification, enabling automatic mitigation, operational savings, and predictive maintenance of the Fermilab LINAC | ML models are deployed for FNAL's Linac to detect, label, and act upon faults. The usage of ML will improve our fault labeling and detection. This will allow for improved operational efficiency, fault statistics, and preventive maintenance. To my knowledge this is the first global accelerator operations ML system. | The ML outputs to a dashboard with fault labels and downtime predictions. The model will also try to predict downtime and possible actions. | 25/09/2025 | Developed in house | No | The ML outputs to a dashboard with fault labels and downtime predictions. The model will also try to predict downtime and possible actions. | my own simulated data; research datasets from scientific experiments | No | Yes | unknown | |||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-730 | AI/ML in Particle Accelerator Controls System | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | Improving the safety and performance of particle accelerator operations through artificial intelligence assisted control systems. | ||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-731 | Expanding Consumer Participation in Consumer Electronics Recycling Programs Utilizing Targeted Marketing Campaigns | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Provides advisory information only. AI use will be used once to compile laws and regulations and will not be ongoing use. | Natural Language Processing | Identify laws and regulations related to e-waste | Consolidate and provide a more convenient venue to find laws and regulations related to e-waste disposal and recycling | Text describing laws and regulations on e-waste disposal and recycling and links to support documentation | Text describing laws and regulations on e-waste disposal and recycling and links to support documentation | No | |||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-732 | Chatlab | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used for general purpose generative AI within a COTS product. | Generative AI | Not available | Not available | Not available | Developed with both contracting and in-house resources | Not available | No | Not available | Not available | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | OREM - ORCC - Oak Ridge EM - Oak Ridge Cleanup Contract (EM) | DOE-734 | CoPilot | Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Not full deployment; testing only in preparation for forced Microsoft rollout in October 2025 | Other | Communication suggestions | Better communication | Provides intelligent suggestions and boosts productivity | 19/08/2025 | Purchased from a vendor | Microsoft | Yes | Provides intelligent suggestions and boosts productivity | No | No | In-Progress | TBS | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | ||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-735 | Microsoft OneNote MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | SRS - SRNL - Savannah River Site - Savannah River National Laboratory (EM) | DOE-736 | Identifying Controlling Variables for Mercury Vapor Release at Y-12's Alpha-4 | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Correlate indoor/outdoor meteorological conditions with mercury vapor releases such that PEL exceedances can be forecasted, improving respiratory worker safety and enhancing work planning | Intraday forecast of mercury concentration in buildings given past chronology of meteorological conditions | Prediction of elevated mercury concentrations in the building | Prediction of elevated mercury concentrations in the building | |||||||||||||||||||||
| Department Of Energy | EE HQ - EE Headquarters (EE) | DOE-738 | FY19 Lab Call – Livewire Data Sharing Platform | Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Exploring different methods to aid data discovery and providing data information within the platform | Generative AI | Lack of easily-accessible, organized information on transportation and mobility-related projects. | Enable time savings and quick answers to queries | GenAI chatbot feature providing info on datasets in the platform or FAQs on how to use the platform | GenAI chatbot feature providing info on datasets in the platform or FAQs on how to use the platform | No | |||||||||||||||||||
| Department Of Energy | PNNL - Pacific Northwest National Laboratory (SC43 OIM) | DOE-739 | Microsoft Copilot Studio | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not meet any requirements for a high-impact use case | Generative AI | Build enterprise conversational AI agents that securely connect to data and workflows. | Enables secure, scalable automation of enterprise workflows. | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | 01/10/2024 | Purchased from a vendor | Microsoft | Yes | Produce high-quality, contextually relevant, and coherent responses or content based on user input. | No | No | No | None of the above | No | |||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-74 | Geo Threat Observable for structured cyber threats related to the energy sector | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Not available | Other | Not available | Correlations, recommendations and predictions for improved cyber response. | Collected data is stored in a graph database and used in machine learning to identify threat similarities | Developed with both contracting and in-house resources | Not available | No | Collected data is stored in a graph database and used in machine learning to identify threat similarities | Open source threat intelligence collected, NLP used to scrape information off of cyber incident reports and websites, some data from cyber sensors, threat feeds and some data from manual threat analysis activities. | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | ORNL - Oak Ridge National Laboratory (SC43 OIM) | DOE-742 | AI for Financial Analysis | Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Doesn't meet criteria. | Classical/Predictive Machine Learning | Productivity enhancement for financial activities | Enhance automation of data processing for financial professionals. | Enhanced financial process workflows | Enhanced financial process workflows | ||||||||||||||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-744 | Microsoft PowerPoint MUI | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-745 | AI/ML to design and optimize materials and their properties | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | Design and optimize materials and their properties for Quantum Information Science and clean energy using AI/ML | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-746 | EES&T Communications Impact | Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Used for general-purpose use cases within the EES&T organization | Other | Not available | Provides actionable insights into communication effectiveness, enabling data-driven improvements in outreach and stakeholder engagement. | Information summaries | Developed with both contracting and in-house resources | Not available | No | Information summaries | No | No | Yes | Not available | |||||||||||||
| Department Of Energy | NA-IM (Enterprise) - Office of the Associate Administrator for Information Management and Chief Information Officer (HQ-Enterprise) (NNSA HQ) | DOE-747 | NA-CI Salesforce | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | This use case is determined to be not high-impact based on the definition in Section 5 of the OMB Memorandum M-25-21. | Other | Solves inefficiency and inconsistency of manual data capture of stakeholder activities. | Improve efficiency, data quality, and visibility. | Synced records of emails, calendar events, and contacts from Outlook into Salesforce. | 16/09/2025 | Purchased from a vendor | Salesforce | Yes | Synced records of emails, calendar events, and contacts from Outlook into Salesforce. | N/A. NA-CI's data is not used for model training or fine-tuning. It is only processed for synchronization within the secure GovCloud Plus environment. | Not Publicly Available | Yes | None of the above | Yes | No open source code | In-Progress | |||||||||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-748 | Microsoft Power BI Desktop | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | EMLA - EM Los Alamos Field Site (EM) | DOE-749 | Microsoft Azure PowerShell | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | AI integrated into the software is solely used to provide a user with assistive functions. | Generative AI | AI was automatically integrated into the product without an identified benefit to the organization. | N/A | 01/02/2023 | Purchased from a vendor | Microsoft | No | N/A | Proprietary/unknown data set used for model training. | Not Applicable | No | Not Applicable | No | Not Available | In-Progress | Not Applicable | Unknown, see "Comments" field. | In-Progress | Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | Not applicable | Not applicable | General solicitations of comments from the public | ||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-75 | Road Conditions from IBM Watson for INL | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Deemed not generative AI per last year's submission. | Other | Not available | Not available | Not available | Developed with both contracting and in-house resources | Not available | No | Not available | Not available | Not available | No | No | Not available | ||||||||||||
| Department Of Energy | BNL - Brookhaven National Laboratory (SC43 OIM) | DOE-750 | Intelligent Acquisition and Reconstruction for Hyper-Spectral Tomography Systems | Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not meet definition | Classical/Predictive Machine Learning | We will develop artificial intelligence (AI) and machine learning (ML) algorithms to enable dramatic improvements in the throughput and performance of hyperspectral (i.e., multiple energies) computed tomography (HSCT) beamlines at DOE BES Scientific User Facilities (SUFs). We will demonstrate the utility of our algorithms by carefully designing experiments for energy materials at HSCT beamlines available at the Spallation Neutron Source (SNS) and the National Synchrotron Light Source II (NSLS-II). We will also develop AI driven data acquisition algorithms that will optimize the scanning strategy on-the-fly, in order to obtain the fewest yet most informative set of measurements (i.e. reducing beam time and/or number of projections in a data set). Our team will provide ML based reconstruction algorithms that can produce high quality reconstructions from incomplete, sparse and low signal-to-noise ratio datasets enabling real-time feedback and ensuring best possible reconstruction on completion of the experiment. Finally, our efforts will be available to the user community at both facilities via a general user interface. | We will develop artificial intelligence (AI) and machine learning (ML) algorithms to enable dramatic improvements in the throughput and performance of hyperspectral (i.e., multiple energies) computed tomography (HSCT) beamlines at DOE BES Scientific User Facilities (SUFs). We will demonstrate the utility of our algorithms by carefully designing experiments for energy materials at HSCT beamlines available at the Spallation Neutron Source (SNS) and the National Synchrotron Light Source II (NSLS-II). We will also develop AI driven data acquisition algorithms that will optimize the scanning strategy on-the-fly, in order to obtain the fewest yet most informative set of measurements (i.e. reducing beam time and/or number of projections in a data set). Our team will provide ML based reconstruction algorithms that can produce high quality reconstructions from incomplete, sparse and low signal-to-noise ratio datasets enabling real-time feedback and ensuring best possible reconstruction on completion of the experiment. Finally, our efforts will be available to the user community at both facilities via a general user interface. | We will develop artificial intelligence (AI) and machine learning (ML) algorithms to enable dramatic improvements in the throughput and performance of hyperspectral (i.e., multiple energies) computed tomography (HSCT) beamlines at DOE BES Scientific User Facilities (SUFs). We will demonstrate the utility of our algorithms by carefully designing experiments for energy materials at HSCT beamlines available at the Spallation Neutron Source (SNS) and the National Synchrotron Light Source II (NSLS-II). We will also develop AI driven data acquisition algorithms that will optimize the scanning strategy on-the-fly, in order to obtain the fewest yet most informative set of measurements (i.e. reducing beam time and/or number of projections in a data set). Our team will provide ML based reconstruction algorithms that can produce high quality reconstructions from incomplete, sparse and low signal-to-noise ratio datasets enabling real-time feedback and ensuring best possible reconstruction on completion of the experiment. | We will develop artificial intelligence (AI) and machine learning (ML) algorithms to enable dramatic improvements in the throughput and performance of hyperspectral (i.e., multiple energies) computed tomography (HSCT) beamlines at DOE BES Scientific User Facilities (SUFs). We will demonstrate the utility of our algorithms by carefully designing experiments for energy materials at HSCT beamlines available at the Spallation Neutron Source (SNS) and the National Synchrotron Light Source II (NSLS-II). We will also develop AI driven data acquisition algorithms that will optimize the scanning strategy on-the-fly, in order to obtain the fewest yet most informative set of measurements (i.e. reducing beam time and/or number of projections in a data set). Our team will provide ML based reconstruction algorithms that can produce high quality reconstructions from incomplete, sparse and low signal-to-noise ratio datasets enabling real-time feedback and ensuring best possible reconstruction on completion of the experiment. | ||||||||||||||||||||
| Department Of Energy | NE INL - NE Idaho National Laboratory (NE) | DOE-76 | Deep Learning Malware Analysis for reusable cyber defenses. | Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Not available | Other | Not available | Identify commonalities in malware | Deep Learning Malware Analysis for reusable cyber defenses. | Developed with both contracting and in-house resources | Not available | No | Deep Learning Malware Analysis for reusable cyber defenses. | Data for malware binaries comes mainly from collected open-source malware repositories; the @DisCo application disassembles binaries and stores them in a graph database for management and vector-embedded queries to identify common malware functions useful for cyber defenses. | Not available | No | Yes | Not available | ||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-93 | To drive insights on the power system reliability, cost, and operations during the energy transition with and without FECM technologies | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | To drive insights on the power system reliability, cost, and operations during the energy transition with and without FECM technologies | Generate predictive scenarios | Predictive scenarios | Predictive scenarios | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-94 | To drive insights on the dependencies between the natural gas and electricity sectors to increase reliability of the NG system | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | To drive insights on the dependencies between the natural gas and electricity sectors to increase reliability of the NG system | Generate predictive scenarios | Predictive scenarios | Predictive scenarios | ||||||||||||||||||||
| Department Of Energy | NETL - National Energy Technology Laboratory (FECM) | DOE-98 | Data platform to expedite access and reuse of carbon ore data for materials, manufacturing and research | Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Limited to a specific area of research | Classical/Predictive Machine Learning | Data platform to expedite access and reuse of carbon ore data for materials, manufacturing and research | Expedite access and reuse of carbon ore data | Data | Data | ||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Design Your Facility | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Discover Financial Business Intelligence System (FBIS) Report Analysis (Sub-CAN Line Items) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | c) Not high-impact | Not high-impact | Agentic AI | How can spend plans be more real-time and less manual to create and keep up to date? The data used to create and monitor spend plans are spread across multiple out-of-the-box reports available in FBIS. Combing through them is time-intensive, making it more difficult to keep spend plans up to date in real-time. This task is especially challenging due to the unstandardized nature of sub-CAN (Congressional Appropriation Number) line item descriptions. | More efficient spend plan creation and monitoring. This AI-enabled tool provides a resource for ACF budget managers to identify related line items across disparate reports in FBIS. | Suggested budget line items related to a user-provided description. AI is used to both search for relevant data (CANs, categories, and sub-CAN line item descriptions) and aggregate information (e.g. supplier name, document number, total obligations, and user-provided projected costs) to create an up-to-date, real-time spend plan. | 25/04/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Suggested budget line items related to a user-provided description. AI is used to both search for relevant data (CANs, categories, and sub-CAN line item descriptions) and aggregate information (e.g. supplier name, document number, total obligations, and user-provided projected costs) to create an up-to-date, real-time spend plan. | RAG implementation using commercially-available LLMs and data from FBIS | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Grant Spend Health Analysis | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Structuring and Validating Completeness of Case Data Information | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The use of AI is narrowly focused on extracting key data points from scanned notices. The outputs do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the 6 cases outlined in M-25-21, page 19. | Agentic AI | How can referrals from the Department of Homeland Security (DHS) be more quickly reviewed for critical pieces of information about their parents and the reason for separation? Since November 2024, when a minor is separated from their parent or legal guardian, U.S. Customs and Border Protection (CBP) within DHS is required to send certain pieces of information about the parent/legal guardian in accordance with the Ms L vs. ICE settlement. CBP sends this information in a block of free text and sometimes does not include the required information. Historically, tracking of required information by ACF's Office of Refugee Resettlement (ORR) has been done manually and inconsistently. | More easily searchable and accurate data on family separations. Quicker validation of CBP compliance in essential data sharing for separated families, enabling faster follow-up as needed to receive any missing data points. | Structured data asset of the critical data points needed about a separation case. AI is used to conduct initial parsing of the data provided by CBP and highlight whether or not the required fields from the Ms L vs. ICE settlement are included and can therefore be updated into the child's profile in ORR's data system. 
ORR's Intakes Team does final review and in cases where data appears to be missing, the ORR Intakes Team reaches back out to CBP for that information. | 24/12/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Structured data asset of the critical data points needed about a separation case. AI is used to conduct initial parsing of the data provided by CBP and highlight whether or not the required fields from the Ms L vs. ICE settlement are included and can therefore be updated into the child's profile in ORR's data system. ORR's Intakes Team does final review and in cases where data appears to be missing, the ORR Intakes Team reaches back out to CBP for that information. | No training or fine-tuning; we are using secure commercially available LLMs. | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | |||||||||||
| Department Of Health And Human Services | HHS/ACF | Structuring Notice of Concern Data | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The use of AI is narrowly focused on extracting key data points from unstructured narratives and validating data completeness. The outputs do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the 6 cases outlined in M-25-21, page 19. | Computer Vision | How can the Office of Refugee Resettlement (ORR) clear its backlog of notices of concern (NOC) and minimize backlog in the future? Notice of Concern (NOC) forms contain critical information regarding the safety of children who have left ORR's care. Some forms are received as scans, with the information not in machine-readable format. ORR receives hundreds of NOCs a day. Due to a personnel shortage in the Prevention of Child Abuse and Neglect (PCAN) team responsible for reviewing and acting on NOCs, as of October 2024 there was a backlog of over 30,000 NOCs. | More effective and efficient review of NOCs. With AI-enabled structuring of data in NOCs received in scanned formats, ORR can reduce the large backlog that has accumulated. | Structured data parsed from the subset of NOCs that are scans of documents. AI is not used to triage NOCs, just to parse information from scanned documents. The parsed information is presented to the PCAN team alongside the original document for review and action. | 24/12/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Structured data parsed from the subset of NOCs that are scans of documents. AI is not used to triage NOCs, just to parse information from scanned documents. 
The parsed information is presented to the PCAN team alongside the original document for review and action. | No training or fine-tuning; we are using secure commercially available LLMs. | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | |||||||||||
| Department Of Health And Human Services | HHS/ACF | Unaccompanied Children Program Policy & Procedure Research Tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Agentic AI | How can the Office of Refugee Resettlement (ORR) research the laws, standards, policies, and procedures applicable to monitoring visits more quickly while maintaining thoroughness? The Office of Refugee Resettlement (ORR) conducts monitoring visits at least monthly to ensure that care providers meet minimum standards for the care and timely release of unaccompanied children, and that they abide by all Federal and State laws and regulations, licensing and accreditation standards, ORR policies and procedures, and child welfare standards. If ORR monitoring finds a care provider to be out of compliance with requirements, ORR issues corrective action findings and requires the care provider to resolve the issue within a specified time frame. Compliance determination involves research into the various laws, standards, policies, and procedures. | Faster issuance of well-informed corrective action findings. The goal of the UC Program Policy & Procedure Research Tool is to speed up this process, as children's health and well-being may be impacted before a corrective action finding is issued and the issue is resolved. The UC Program Policy & Procedure Research Tool speeds up research of relevant laws, standards, policies, and procedures, drawing on content curated and approved by ORR's policy team. This research is one part of the process that informs ORR's monitoring team's decisions on whether corrective actions are needed and, if so, what corrective actions. 
| Initial assessment of whether a care provider is in compliance with applicable laws, standards, policies, and procedures applicable to the care of unaccompanied children, with an explanation of evidence pulled from monitoring visit reports and the policy documents and accurate citations. AI is not used to suggest corrective actions but rather support determination of whether care providers are in compliance. | 25/07/2026 | c) Developed with both contracting and in-house resources | MIT Lincoln Labs | Yes | Initial assessment of whether a care provider is in compliance with applicable laws, standards, policies, and procedures applicable to the care of unaccompanied children, with an explanation of evidence pulled from monitoring visit reports and the policy documents and accurate citations. AI is not used to suggest corrective actions but rather support determination of whether care providers are in compliance. | RAG implementation using commercially-available LLMs and curated dataset of applicable laws, standards, policies, and procedures applicable to the care of unaccompanied children | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Unaccompanied Children Process Model Digital Twins | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Analyzing Public Comments on Proposed Rule | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Outreach List Segmentation | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Ask HR Policy | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF better support Executive Officers and Administrative Officers in finding relevant Human Resources (HR) policy? HHS has over 70 HR policies that ACF Executive Officers/Administrative Officers must navigate when trying to find an answer to a question. | Faster fact-finding on HHS's HR policies. Rather than clicking through multiple policies to try to identify the ones with relevant information to their question, Executive Officers/Administrative Officers can ask a question in natural language. | Suggested answer to an HR-related question, with thought process and links to the relevant section of official document(s). Ask HR Policy provides a secure interface permissioned only to select ACF Executive Officers and Administrative Officers. Users type in questions about managing employees covered by the HR Policy Library, and Ask HR Policy provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. | 24/05/2026 | a) Purchased from a vendor | Palantir | Yes | Suggested answer to an HR-related question, with thought process and links to the relevant section of official document(s). Ask HR Policy provides a secure interface permissioned only to select ACF Executive Officers and Administrative Officers. Users type in questions about managing employees covered by the HR Policy Library, and Ask HR Policy provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. 
| RAG implementation using commercially-available LLMs and HHS's Official HR Policy Library | https://www.hhs.gov/about/agencies/asa/ohr/hr-library/index.html | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||
| Department Of Health And Human Services | HHS/ACF | Child Welfare Information Automated Inquiry System (Note: previously named "Child Welfare Information Gateway OneReach Application") | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | How can Child Welfare Information Hotline callers get the right information faster without increasing staffing? The Children's Bureau runs the Child Welfare Information Gateway, a connection to trusted resources on the child welfare continuum. The Information Gateway has a hotline for answering questions or requesting information: https://www.childwelfare.gov/stay-connected/contact/. Callers to the hotline range from those having more routine questions (such as asking for the contact information for their state's child welfare agency) to reporting more complex, nuanced situations. | Approximately a quarter of inquiries to the Child Welfare Information Hotline are assisted by AI, freeing up time for staff to focus on more complex, nuanced cases. In the first 4 years, this amounts to ~2,500 inquiries assisted by AI. | The Information Gateway Hotline connects to a phone interactive voice response (IVR). The Information Gateway hotline maintains a database of state hotlines for reporting child abuse and neglect that it can connect a caller to based on their inbound phone area code. Additionally, the Information Gateway Hotline offers a limited FAQ texting service that utilizes natural language processing to answer user queries. | 20/03/2026 | a) Purchased from a vendor | Amazon Connect (current); OneReach (previous, deprecated) | No | The Information Gateway Hotline connects to a phone interactive voice response (IVR). 
The Information Gateway hotline maintains a database of state hotlines for reporting child abuse and neglect that it can connect a caller to based on their inbound phone area code. Additionally, the Information Gateway Hotline offers a limited FAQ texting service that utilizes natural language processing to answer user queries. | User queries are used for reinforcement training by a human AI trainer and to develop additional FAQs. | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Collective Bargaining Compass | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can the HHS-NTEU collective bargaining agreement be more easily referenced? The HHS NTEU Collective Bargaining Agreement and associated rules are over 400 pages long, covering numerous topics related to employer-labor relations. | Faster fact-finding on the HHS-NTEU collective bargaining agreement. Rather than searching for relevant passages through keyword matching, people can ask their questions in natural language. | Suggested answer to a question, with thought process and links to relevant section of official document(s). The Collective Bargaining Compass provides a secure Virtual Assistant interface permissioned only to select ACF managers. Users type in questions about managing employees covered by the Collective Bargaining Agreement, and the Virtual Assistant provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. | 24/02/2026 | a) Purchased from a vendor | Palantir | Yes | Suggested answer to a question, with thought process and links to relevant section of official document(s). The Collective Bargaining Compass provides a secure Virtual Assistant interface permissioned only to select ACF managers. Users type in questions about managing employees covered by the Collective Bargaining Agreement, and the Virtual Assistant provides a narrative of its thought process then suggests an answer based in the documentation, alongside links that take the user to the relevant section of the official document. 
| RAG implementation using commercially-available LLMs and the latest HHS/NTEU collective bargaining agreement, plus any relevant procedures and follow-on memoranda. | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Discover User Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF Discover users better understand how to use the platform? ACF Discover is a relatively new platform designed to streamline various analyses and data maintenance responsibilities of ACF Executive Officers and other ACF administrative and management staff. When launched, ACF Discover users were trained with instructional documentation. However, some users still find it difficult to understand the different software modules and how to use them. | Easier navigation of the ACF Discover Staff Management platform. Rather than searching for relevant passages through keyword matching, people can ask their questions in natural language. | Suggested answer to a question, with thought process and links to the relevant section of official document(s). The User Documentation Assistant provides a secure virtual assistant interface that is only available to ACF Discover Users. Users are able to ask the assistant specific questions about the capabilities of ACF Discover along with how to leverage tools and applications. The Virtual Assistant is able to provide answers by referencing the User Reference guide. | 24/01/2026 | a) Purchased from a vendor | Palantir | Yes | Suggested answer to a question, with thought process and links to the relevant section of official document(s). The User Documentation Assistant provides a secure virtual assistant interface that is only available to ACF Discover Users. Users are able to ask the assistant specific questions about the capabilities of ACF Discover along with how to leverage tools and applications. The Virtual Assistant is able to provide answers by referencing the User Reference guide. 
| RAG implementation using commercially-available LLMs and the latest ACF Discover user reference guide. | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Policy Knowledge Base Data Migration | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Qualitative Analysis | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | How can thematic coding and trend analysis across qualitative data be done more efficiently? ACF staff often conduct surveys and interviews, which generate qualitative data that needs to be analyzed for themes and trends. The standard approach involves multiple human passes of labeling the data for analysis, which is very time-intensive. | Faster initial labeling of qualitative data that human reviewers are then able to correct and iterate from | ACF employees have several tools available to them to support qualitative analysis. Typically the tools are asked to assist with one of the following scenarios: - Take a user-provided list of topics and text passages to initially categorize passages by topic(s) - Suggest potential categories for organizing text passages - Identify thematic trends across a corpus of narrative data - Conduct sentiment analysis | 23/03/2026 | a) Purchased from a vendor | Lumivero, Qualtrics, Credal, Ask Sage | Yes | ACF employees have several tools available to them to support qualitative analysis. Typically the tools are asked to assist with one of the following scenarios: - Take a user-provided list of topics and text passages to initially categorize passages by topic(s) - Suggest potential categories for organizing text passages - Identify thematic trends across a corpus of narrative data - Conduct sentiment analysis | RAG implementation using commercially-available LLMs and user-provided narrative data | Yes | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Intakes Referral Parser | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Training and Technical Assistance (TTA) GenAI Chatbot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Funding Opportunity Redundancy Analysis | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Unaccompanied Child Sponsor Identity Verification | a) Pre-deployment The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | a) High-impact | High-impact | Computer Vision | How can ACF strengthen sponsor identity verification to reduce fraudulent sponsor applications? Throughout the process of sponsor vetting, there are various touchpoints where the identity of the sponsor is critical to ensuring a child will be placed with a safe guardian. Knowing that the person (sponsor, household adult) is who they claim to be and that the person presenting at different points of the sponsor application process is consistently the person who was vetted is essential to ensure the welfare of a child. | Increased certainty that the adults applying to sponsor/care for a child released from ORR care are who they claim to be, so that they may be properly vetted in providing a safe environment for children post-release. | Confirmation that person A is person A at all touchpoints of the sponsor vetting process. | Confirmation that person A is person A at all touchpoints of the sponsor vetting process. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Head Start Correspondence Categorizer | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | How can regional management and program specialists with large grant workloads keep track of all the correspondence being received by staff in their regional offices? The Office of Head Start's (OHS) program specialists receive correspondence through the Head Start Enterprise System (HSES) for a variety of topics. Many of the requests, questions, and reports are tracked to completion in another system that has more robust alerting and workflow management capabilities. OHS is automating the data transfer between these two systems, introducing a data processing step that helps categorize correspondence. | More efficient tracking of correspondence. The AI categorization of correspondence can help managers and program specialists more quickly identify correspondence that requires more immediate action. | #NAME? | #NAME? | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ACF | Builder Buddy | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Agentic AI | How can ACF staff more readily develop their own tailored virtual assistants on our enterprise genAI platform? ACF's enterprise generative AI platform gives users the tools to create their own tailored virtual "agents" to assist with more specialized tasks specific to a user's work. Users do not need to code to build these virtual assistants but do have to provide adequate context and instructions. Builder Buddy lets users draft a virtual assistant by describing their needs and providing context in a natural conversational manner as an alternative to a form-based builder interface. | More ACF staff feel empowered and equipped to configure their own tailored virtual assistants, increasing the usefulness of LLMs beyond basic chat. Tailored virtual assistants allow ACF staff to leverage LLMs in repeated workflows, reducing administrative burden and allowing staff to focus on higher-order analysis. | Draft virtual assistant configurations that users then further test and iterate on before deployment. | 25/05/2026 | a) Purchased from a vendor | Credal | Yes | Draft virtual assistant configurations that users then further test and iterate on before deployment. | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | |||||||||||||
| Department Of Health And Human Services | HHS/ACF | Document Review for Alignment with Executive Orders: Position Descriptions | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF identify position descriptions that may need to be adjusted for alignment with recent executive orders? ACF needed to conduct an audit across position descriptions in accordance with HHS Secretarial Directives related to recent executive orders such as Executive Order 14151 "Ending Radical and Wasteful Government DEI Programs and Preferencing" and Executive Order 14168 "Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government". | Increased efficiency of review, with reduced administrative burden on staff. Staff were able to focus time more effectively by using AI to support the flagging of potentially affected position descriptions. | Initial list of position descriptions for further review, validation, and adjustments as applicable by ACF's team. AI was not used to make any final determinations. It was leveraged to more effectively identify position descriptions that may require revision. | 25/03/2026 | c) Developed with both contracting and in-house resources | Palantir | Yes | Initial list of position descriptions for further review, validation, and adjustments as applicable by ACF's team. AI was not used to make any final determinations. It was leveraged to more effectively identify position descriptions that may require revision. | RAG implementation using commercially-available LLMs and user-provided position descriptions | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Document Review for Alignment with Executive Orders: Grant Materials | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF identify grants that may need to be reviewed for alignment with recent executive orders? ACF needed to conduct an audit across existing grants and new grant applications in accordance with HHS Secretarial Directives related to recent executive orders such as Executive Order 14151 "Ending Radical and Wasteful Government DEI Programs and Preferencing" and Executive Order 14168 "Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government". | Increased efficiency of review, with reduced administrative burden on staff. To increase the efficiency of the executive order alignment review, ACF is leveraging an AI-based process that reviews application submission files and generates initial flags and priorities for discussion, which are then routed to ACF Program Office staff for final review, justification, and recommendation. | List of grants for program staff to review, with an initial assessment of compliance against executive orders and example passages from the grant materials for flagged grants. Staff are only able to view grants associated with their program office. In addition to the short summary of the results from our AI processing, staff are presented with links to associated grant files to reference while doing their review and making grant compliance assessments. | 25/03/2026 | c) Developed with both contracting and in-house resources | Palantir, Credal | Yes | List of grants for program staff to review, with an initial assessment of compliance against executive orders and example passages from the grant materials for flagged grants. Staff are only able to view grants associated with their program office. In addition to the short summary of the results from our AI processing, staff are presented with links to associated grant files to reference while doing their review and making grant compliance assessments. | RAG implementation using commercially-available LLMs and user-provided grant materials | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Grant management support: structuring information in applications | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Agentic AI | How can ACF staff more efficiently review grant applications that have non-standardized formats? Grant applications for ACF-funded programs can come in different formats, even when a standard set of questions or template is provided. ACF staff evaluating applications look for explanation and details to assess against pre-established evaluation criteria. Going back-and-forth across an application to find the relevant explanation can be time-intensive, especially for applications that include multiple documents spanning 50+ pages. | Increased efficiency of review so that more time can be spent on grant application analysis and evaluation | Varies, depending on the program office. Outputs generally involve summarizing information, extracting key information into a specific format, flagging potential gaps or inconsistencies, and providing citations / page numbers to support follow-up review and validation. In all cases, AI is only used to support review of grant applications but does not make any final determinations for awards. | 25/07/2026 | c) Developed with both contracting and in-house resources | Palantir, Credal | Yes | Varies, depending on the program office. Outputs generally involve summarizing information, extracting key information into a specific format, flagging potential gaps or inconsistencies, and providing citations / page numbers to support follow-up review and validation. In all cases, AI is only used to support review of grant applications but does not make any final determinations for awards. | RAG implementation using commercially-available LLMs and user-provided grant applications | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Acquisition support: co-drafting acquisition packages | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF acquisition teams more efficiently draft acquisition packages? In addition to tailored performance work statements (PWS), acquisition packages include multiple documents that often require information based on the PWS. There are also instances where a recompete is issued that largely follows a previous contract, with some updates to volumes. Different contract awarding agencies have different formats. ACF's acquisition teams therefore commonly need to repackage information. | Increased efficiency in preparing acquisition packages so that more time is spent on the substance of scoping contracts and less time on rote drafting | Draft language for various parts of an acquisition package based on user-provided context and direction. For instance, based on a provided set of task narratives, a user may ask a large language model to draft the table of deliverables. Based on a draft set of requirements, a user may ask the large language model to provide an initial suggestion for organizing tasks. Based on a copy of a previous modification memo and an executed contract, a user may ask a large language model to draft a new modification memo to exercise the next option year. | 24/12/2026 | c) Developed with both contracting and in-house resources | Credal, Ask Sage, Microsoft | Yes | Draft language for various parts of an acquisition package based on user-provided context and direction. For instance, based on a provided set of task narratives, a user may ask a large language model to draft the table of deliverables. Based on a draft set of requirements, a user may ask the large language model to provide an initial suggestion for organizing tasks. Based on a copy of a previous modification memo and an executed contract, a user may ask a large language model to draft a new modification memo to exercise the next option year. | RAG implementation using commercially-available LLMs and user-provided context on acquisition needs | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/ACF | Acquisition support: assisting reviews and co-drafting technical evaluation documents | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI | How can ACF teams more efficiently review contract proposals and summarize technical evaluation discussions? In response to Requests for Information and Requests for Proposals, ACF teams receive responses from interested vendors. In the review process, ACF staff need to provide summarized comments for each response on potential suitability for delivering the work. These summaries are based on individual and group review against pre-established criteria. When there is a high volume of vendor responses, review teams have many summaries to write. | Increased efficiency in drafting technical evaluation documents, so that more time is spent on review and analysis and less time is spent on "blank screen syndrome" | Draft language for technical evaluation documents, based on user-provided context, direction, analysis, and examples. For example, the user may provide a statement on why they assess a proposal to be unsuitable based on the evaluation criteria, and then leverage the AI tool to draft language to pull and format specific examples with page citations from the proposal. AI is only used to draft language and make it easier to find relevant passages in proposal materials. AI is not used to make final determinations. The technical evaluators review and revise as needed all AI-drafted language, verifying accuracy of any cited excerpts. | 25/07/2026 | c) Developed with both contracting and in-house resources | Credal, Ask Sage, Microsoft | Yes | Draft language for technical evaluation documents, based on user-provided context, direction, analysis, and examples. For example, the user may provide a statement on why they assess a proposal to be unsuitable based on the evaluation criteria, and then leverage the AI tool to draft language to pull and format specific examples with page citations from the proposal. AI is only used to draft language and make it easier to find relevant passages in proposal materials. AI is not used to make final determinations. The technical evaluators review and revise as needed all AI-drafted language, verifying accuracy of any cited excerpts. | RAG implementation using commercially-available LLMs and received capability statements | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | k) None of the above | No | To be posted on https://www.hhs.gov/pia/index.html, pending HHS OCIO action | ||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | AHRQ Search | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Organization-wide search that includes Relevancy Tailoring, Auto-generation Synonyms, Automated Suggestions, Suggested Related Content, Auto Tagging, and "Did you mean?" to allow visitors to find specific content. This AI use case enhances our agency's efficiency and user experience by optimizing search results, auto-completing queries, suggesting relevant searches and content tags, as well as proposing spelling corrections. | This AI use case aims to optimize search results by adjusting their ranking, ensuring that the most pertinent information is displayed at the top. It also enhances search effectiveness by adding synonyms to queries behind the scenes. It further improves user experience by auto-completing queries as they are being typed, and showing related searches that might offer additional valuable insights. The system also proposes content tags automatically, leveraging machine learning to assess existing content tagging patterns. Additionally, it suggests spelling corrections and reformats search queries based on data from Google Analytics. | This AI use case aims to optimize search results by adjusting their ranking, ensuring that the most pertinent information is displayed at the top. It also enhances search effectiveness by adding synonyms to queries behind the scenes. It further improves user experience by auto-completing queries as they are being typed, and showing related searches that might offer additional valuable insights. The system also proposes content tags automatically, leveraging machine learning to assess existing content tagging patterns. Additionally, it suggests spelling corrections and reformats search queries based on data from Google Analytics. | 19/09/2026 | c) Developed with both contracting and in-house resources | RIVA Solutions | Yes | This AI use case aims to optimize search results by adjusting their ranking, ensuring that the most pertinent information is displayed at the top. It also enhances search effectiveness by adding synonyms to queries behind the scenes. It further improves user experience by auto-completing queries as they are being typed, and showing related searches that might offer additional valuable insights. The system also proposes content tags automatically, leveraging machine learning to assess existing content tagging patterns. Additionally, it suggests spelling corrections and reformats search queries based on data from Google Analytics. | Website Data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | Chatbot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | Chatbot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/OEREP | Enhancing Diversity in Peer Review - Pilot | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | NLQuery- As Data or Pulse | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CQuIPS | Quality and Safety Review System AI-enabled automated abstraction | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CFACT | AI DevOps - Improving Development and CI/CD Operations | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Introducing AI to DevOps can help identify and reduce errors, shorten release cycles, and empower development teams with data-driven insights, resulting in faster continuous integration and shorter development lifecycles. | Integrating AI into the DevOps pipeline can boost efficiency, enhance code quality, and accelerate the development cycle. AI code review uses artificial intelligence algorithms to analyze source code for potential issues. Initial integration of AI code review can assist in detecting bugs, security vulnerabilities, performance bottlenecks, and deviations from coding standards. | Integrating AI into the DevOps pipeline can boost efficiency, enhance code quality, and accelerate the development cycle. AI code review uses artificial intelligence algorithms to analyze source code for potential issues. Initial integration of AI code review can assist in detecting bugs, security vulnerabilities, performance bottlenecks, and deviations from coding standards. | Pingwind | Integrating AI into the DevOps pipeline can boost efficiency, enhance code quality, and accelerate the development cycle. AI code review uses artificial intelligence algorithms to analyze source code for potential issues. Initial integration of AI code review can assist in detecting bugs, security vulnerabilities, performance bottlenecks, and deviations from coding standards. | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/AHRQ/CEPI | USPSTF Public Forms Data AI Integration | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASPR/ODAIA | emPOWER AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Public health authorities, first responders and others across the emergency management spectrum indicated that they needed to be able to more rapidly access HHS emPOWER Map publicly available data, particularly in unstable internet conditions, during disasters. emPOWER AI, an Amazon Alexa Skill, was created to allow anyone with a smartphone, particularly emergency and first responders, to be able to request the data from the HHS emPOWER Map and receive it within seconds from the field to headquarters. | Public health authorities, first responders and others across the emergency management spectrum indicated that they needed to be able to more rapidly access HHS emPOWER Map publicly available data, particularly in unstable internet conditions, during disasters. emPOWER AI, an Amazon Alexa Skill, was created to allow anyone with a smartphone, particularly emergency and first responders, to be able to request the data from the HHS emPOWER Map and receive it within seconds from the field to headquarters. For example, a local first responder in the field at a location of a disaster could rapidly identify the total number of Medicare beneficiaries that live independently in a given zip code, and may be adversely impacted by a rapidly progressing flood or wildfire emergency and use this information to inform decision-making on evacuation assistance resources and teams. | emPOWER AI gives the user publicly available data from the HHS emPOWER Map on the number of electricity-dependent Medicare beneficiaries at the national, state, territory, county, and ZIP Code levels. | 19/12/2026 | c) Developed with both contracting and in-house resources | Communications Training & Analysis Corporation (CTAC) | Yes | emPOWER AI gives the user publicly available data from the HHS emPOWER Map on the number of electricity-dependent Medicare beneficiaries at the national, state, territory, county, and ZIP Code levels. | Publicly available de-identified data on the HHS emPOWER Map | No | Publicly available data, PIA is under the HHS emPOWER Map ATO. | k) None of the above | No | No | Publicly available data, PIA is under the HHS emPOWER Map ATO. | |||||||||||
| Department Of Health And Human Services | HHS/ASPR/CP | Senior Leadership Briefing Generation | a) Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The manual process of parsing and understanding documents, and summarizing their content | Saves the SLB team time manually typing out a briefing | Automatically generates a Senior Leadership Briefing based on the user-inputted requirements and documents | Automatically generates a Senior Leadership Briefing based on the user-inputted requirements and documents | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASPR/CP | AIP Cyber Incident Ingestion | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Cybersecurity | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The manual process of parsing out cyber incidents and entering them into the system | Allows the Cyber team to quickly ingest cyber incident data | Automatically parses and creates cyber incidents based on the user-inputted description or email | 25/11/2026 | b) Developed in-house | Palantir | Yes | Automatically parses and creates cyber incidents based on the user-inputted description or email | ASPR Cyber Incident Descriptions | No | N/A | k) None of the above | Yes | N/A | ||||||||||||
| Department Of Health And Human Services | HHS/ASPR/CP | ASPR TRACIE - Web search results improvement (prototyping stage) | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Web search results improvement - ASPR TRACIE | Public users searching on the website can expect to see more relevant and improved search results. | The output will improve current search results, using both keywords and natural language to return more relevant results. | The output will improve current search results, using both keywords and natural language to return more relevant results. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OA | PRISM Ally | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Generative AI | Assists users with federal regulatory and agency policy questions related to acquisition | AI will help HHS deliver faster, higher-quality public services and measurably improve mission outcomes by cutting cycle times and backlogs, boosting accuracy, and increasing first-contact resolution and customer satisfaction. It will drive cost avoidance and productivity through automation and reuse of shared data/models and code, lowering unit costs per transaction while protecting taxpayer dollars. Built-in accessibility, interpretability, and human-in-the-loop safeguards strengthen equity, fairness, and public trust, with transparent citations, monitoring, and appeal mechanisms. The workforce benefits from targeted upskilling and copilots that reduce manual research and documentation, improving time-to-competency and decision quality. Data quality and interoperability improve via standardized metadata, provenance, and sharing, enabling secure, portable, and interoperable solutions that reduce vendor lock-in and long-term risk. Success will be tracked with concrete metrics such as cycle-time reduction, error-rate and rework decreases, customer experience score gains, cost-per-action savings, accessibility conformance, reuse/adoption counts, training completions, and compliance/incident rates. | Ally utilizes a Retrieval-Augmented Generation (RAG) approach to develop answers to user-submitted questions. The user submits a query and any prompt instruction needed through the PRISM Ally user interface. Using the query, PRISM Ally performs a vector search of its private knowledge repository to identify relevant information that can provide enhanced context for developing the answer. The user query, prompt, and enhanced context are then passed to the LLM. The LLM considers the information and returns an answer. The utilization of enhanced context provides guardrails for the LLM and helps to increase the accuracy of the answers provided. | 25/06/2026 | a) Purchased from a vendor | Unison | No | Ally utilizes a Retrieval-Augmented Generation (RAG) approach to develop answers to user-submitted questions. The user submits a query and any prompt instruction needed through the PRISM Ally user interface. Using the query, PRISM Ally performs a vector search of its private knowledge repository to identify relevant information that can provide enhanced context for developing the answer. The user query, prompt, and enhanced context are then passed to the LLM. The LLM considers the information and returns an answer. The utilization of enhanced context provides guardrails for the LLM and helps to increase the accuracy of the answers provided. | The PRISM Ally application and private knowledge repository are located within the Unison Cloud, in a FedRAMP moderate environment. The application and repository are maintained by Unison. Regulatory content (e.g. FAR, DFARS, agency supplementals) within the repository is sourced from government authenticated sources (e.g. acquisition.gov, ecfr.gov). Unison updates the regulatory content within the repository with each new regulatory update release. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/ASTP | Certification and Testing/Program Administration AI-enabled Internal Processes | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | ONC's certification and related program operations rely on many manual, text-heavy, and repetitive tasks (e.g., analyzing surveillance reports, validating certification test results, generating public meeting materials, preparing communications, managing Jira tickets, summarizing standards/IG guidance, and drafting acquisition documents). These activities are time-consuming, error-prone, and difficult to scale as program workload increases. The AI use case is intended to automate or semi-automate routine document drafting, data summarization, basic analysis, and information retrieval across these functions so staff can focus on higher-value review, oversight, and decision-making. | Expected benefits include: (1) improved efficiency of internal certification and program operations (e.g., faster preparation of surveillance analyses, CHPL artifacts, and meeting materials); (2) reduced risk of omissions and inconsistencies in internal documents through standardized AI-assisted drafting and terminology checks; (3) quicker access to relevant information from CHPL data, Jira tickets, standards implementation guides, and financial spreadsheets; and (4) more timely, clear, and consistent public-facing communications and policy/support documents. Indirectly, these improvements support ONC's mission to advance safe, interoperable health IT by improving the quality and timeliness of its certification, oversight, and communication activities. | AI-generated or AI-assisted outputs include: (1) draft analytical summaries and reports (e.g., surveillance reporting analysis, RWT results validation, SED categorization, data visualizations); (2) draft public-facing and stakeholder communications (e.g., webinar Q&As, plain-language explanations of regulatory or standards text, communication templates); (3) internal operational artifacts (e.g., CHPL backups, release notes, Jira responses and ticket summaries, Excel query results); and (4) first drafts of acquisition and planning documents (e.g., statements of work, market research, memoranda of need, acquisition plans). All outputs are reviewed, edited, and approved by ONC staff before use. | AI-generated or AI-assisted outputs include: (1) draft analytical summaries and reports (e.g., surveillance reporting analysis, RWT results validation, SED categorization, data visualizations); (2) draft public-facing and stakeholder communications (e.g., webinar Q&As, plain-language explanations of regulatory or standards text, communication templates); (3) internal operational artifacts (e.g., CHPL backups, release notes, Jira responses and ticket summaries, Excel query results); and (4) first drafts of acquisition and planning documents (e.g., statements of work, market research, memoranda of need, acquisition plans). All outputs are reviewed, edited, and approved by ONC staff before use. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/ASTP | The NEF DSI/AI project addresses local validation of AI-based clinical decision support (CDS)/decision support interventions (DSI) in provider settings. | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Other | Assess quality of AI-based clinical decision support (CDS) tools | [LAVA, or "Local AI Evaluator", does not itself use AI.] LAVA would assist clinicians in assessing the accuracy, and therefore usefulness, of AI diagnosis tools. AI diagnosis tools are developed to apply to a national population, rather than to smaller, local populations, such as those served by small providers with one or few physical locations. These smaller, local patient populations may have different demographics than those on which the AI-based tool was trained, so the LAVA tool can help illuminate these differences and how the AI tool may apply to the local population. This can help providers learn how to best use their AI diagnosis tools. | Outputs are not generated by AI, but rather use open source information to assess outputs from other AI-based tools. This tool's outputs are metrics that measure, for example, accuracy and precision of external AI predictions of disease onsets in local patient populations. | Outputs are not generated by AI, but rather use open source information to assess outputs from other AI-based tools. This tool's outputs are metrics that measure, for example, accuracy and precision of external AI predictions of disease onsets in local patient populations. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | HaMLET: Harnessing Machine Learning to Eliminate Tuberculosis | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Adverse Childhood Experiences (ACEs) Literature Review Dashboard | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automated Analysis of Injury Control Research Center (ICRC) Annual Progress Reports (APRs) using Large Language Models | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is designed to streamline the review process of Annual Progress Reports (APRs) submitted by Injury Control Research Centers (ICRCs), improve efficiency, and support the evaluation of the performance and progress of ICRC-funded activities. | The AI will help quickly and efficiently identify key challenges and insights from ICRC APRs, enabling more effective decision-making in the review process. By automating the extraction and analysis of critical information, the AI allows the ICRC team to focus on higher-level evaluation and strategic planning. This will reduce the time and resources needed for manual review, improve the consistency and accuracy of assessments, and facilitate faster responses to ICRC needs. Ultimately, this will support ICRCs in overcoming challenges and achieving their research and injury control goals, benefiting the public health system as a whole. | The AI analyzes the textual content of APRs, focusing initially on sections detailing the challenges faced by ICRCs. It identifies key themes, trends, and critical information that may require further attention. The AI methodology extracts insights and patterns from the data, which can then be compared with manual qualitative analysis outcomes. In subsequent stages, the AI will be expanded to analyze other sections of the APRs, such as progress toward goals and program impact. | 23/08/2026 | b) Developed in-house | Yes | The AI analyzes the textual content of APRs, focusing initially on sections detailing the challenges faced by ICRCs. It identifies key themes, trends, and critical information that may require further attention. The AI methodology extracts insights and patterns from the data, which can then be compared with manual qualitative analysis outcomes. In subsequent stages, the AI will be expanded to analyze other sections of the APRs, such as progress toward goals and program impact. | Injury Control Research Center Annual Progress Reports | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Detecting Stimulant and Opioid Misuse and Illicit Use | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To detect and analyze non-therapeutic (illicit or misuse) stimulant and opioid use from free-text clinical notes in EHRs, which is not possible using standard medical codes. | The AI models enable the extraction of novel insights from EHRs regarding non-therapeutic drug use, improving the statistical analysis of health data for the National Hospital Care Survey (NHCS). This supports more accurate public health statistics and may influence analysis of other datasets with EHR clinical notes. | Two machine learning models (one for internal use, one for public release) that, together with rule-based text analysis, determine whether a patient has used a drug therapeutically or non-therapeutically, providing new insights for health statistics. | 24/03/2026 | b) Developed in-house | No | Two machine learning models (one for internal use, one for public release) that, together with rule-based text analysis, determine whether a patient has used a drug therapeutically or non-therapeutically, providing new insights for health statistics. | National Hospital Care Survey 2020 clinical notes | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DHP Virtual Assistant | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To assist HIV researchers by retrieving relevant information related to HIV and HIV research, improving productivity and efficiency in literature review. | Expected to increase productivity among HIV researchers by streamlining information retrieval for HIV research. | The AI assistant uses retrieval augmented generation (RAG) to return information related to HIV based on a user's query and other documents. | 24/11/2026 | b) Developed in-house | Yes | The AI assistant uses retrieval augmented generation (RAG) to return information related to HIV based on a user's query and other documents. | Internal documentation containing mapped eHARS LOINC codes, IQVIA data dictionaries, APRs | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Fuzzy matching tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To improve clearance and review management of CDC publications by identifying similar records between eClearance submissions and CDC-authored publications, ensuring compliance with NIHMS and CDC public access policies, and supporting science prioritization and impact analyses. | The tool is expected to streamline the clearance process, ensure compliance with public access policies, and assist in identifying and prioritizing CDC-authored publications. This will save staff time and improve the efficiency and accuracy of publication management. | The tool outputs matched records between eClearance submissions and CDC-authored publications, identifying potential duplicates or related documents for internal review. | 23/03/2026 | b) Developed in-house | No | The tool outputs matched records between eClearance submissions and CDC-authored publications, identifying potential duplicates or related documents for internal review. | eClearance submissions data and Science Clips data | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Reviewing Global Influenza Vaccine Literature | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To efficiently review a large volume of published literature and identify abstracts related to access to influenza vaccines. | The AI system is expected to save significant staff time by automating the initial literature review process, allowing epidemiologists to focus on in-depth analysis of relevant publications. This increases efficiency and scalability in reviewing global literature related to vaccine access. | A list of abstracts from published journal articles that are relevant to vaccine access. These abstracts are identified using large language models and are then reviewed manually for further analysis. | 24/10/2026 | b) Developed in-house | Yes | A list of abstracts from published journal articles that are relevant to vaccine access. These abstracts are identified using large language models and are then reviewed manually for further analysis. | Published journal articles accessed through freely available sources or via CDC research agreements. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | HIV Data Virtual Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To improve the data retrieval and automation process for HIV researchers by providing information from datasets and generating code for data analysis, thereby alleviating challenges in finding appropriate HIV-related data. | The AI assistant will improve productivity and research efforts by streamlining the process of finding relevant HIV datasets and generating analysis code, saving researchers time and enabling more efficient data-driven research. | The AI system uses retrieval augmented generation (RAG) to return information related to HIV based on user queries. Outputs include lists of datasets, associated variable names, and code (SAS, R, Python) for analysis, as well as specific dataset information based on researcher queries. | The AI system uses retrieval augmented generation (RAG) to return information related to HIV based on user queries. Outputs include lists of datasets, associated variable names, and code (SAS, R, Python) for analysis, as well as specific dataset information based on researcher queries. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Identify infrastructure supports for physical activity (e.g. sidewalks) in satellite and roadway images | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision | To streamline and automate the surveillance of sidewalks and other infrastructure that support physical activity, reducing the labor and cost associated with manual inspection. | The technology has the potential to significantly minimize the effort required for cataloging sidewalks and related infrastructure, which are important for promoting physical activity. This could lead to more efficient and cost-effective surveillance, supporting public health monitoring and interventions. | Outputs include geocoded data tables, maps, GIS layers, or summary reports identifying sidewalks, bicycle lanes, and other relevant infrastructure from satellite and roadway images. | 23/09/2026 | c) Developed with both contracting and in-house resources | No | Outputs include geocoded data tables, maps, GIS layers, or summary reports identifying sidewalks, bicycle lanes, and other relevant infrastructure from satellite and roadway images. | Publicly-available images were used for model evaluation | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Immunization Information Systems Guidance Documentation Navigation and Management (IDAB EDAV Azure OpenAI Technology Use) | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | To provide a modernized, efficient, and user-friendly method for CDC staff to retrieve, interact with, and update Immunization Information Systems (IIS) guidance documentation, improving knowledge retrieval and supporting the creation and validation of new guidance documents. | This AI solution enables faster, more actionable access to IIS guidance for subject matter experts, helps new employees find information more easily, and improves understanding of best practices. It streamlines the process of drafting, refining, and validating new guidance documents, increasing efficiency and accuracy in knowledge management. | The AI system provides synthesized answers to user queries in a Q&A interface, retrieving and summarizing information from publicly available IIS guidance documents. Outputs include generated text responses, draft guidance documents, and updated documentation. | The AI system provides synthesized answers to user queries in a Q&A interface, retrieving and summarizing information from publicly available IIS guidance documents. Outputs include generated text responses, draft guidance documents, and updated documentation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | LaserAI | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To reduce screening time and improve accuracy in all phases of systematic reviews by automating title and abstract screening, PDF retrieval, full-text review, and data extraction. | The AI is expected to streamline systematic review processes, reduce manual effort, and improve accuracy in identifying and extracting relevant data. The synthesized and graded data will inform the development of evidence-based infection prevention and control recommendations for healthcare settings. | The AI system outputs include prioritized lists of potentially relevant articles for screening, retrieved PDFs from PubMed, and suggested data for extraction from PDFs. | 24/04/2026 | a) Purchased from a vendor | LaserAI | No | The AI system outputs include prioritized lists of potentially relevant articles for screening, retrieved PDFs from PubMed, and suggested data for extraction from PDFs. | None: Any data used to train the AI is publicly available, peer-reviewed data. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NewsScape | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Emergency Management | Pilot | c) Not high-impact | Not high-impact | Generative AI | Development of early warning indicators. News articles can form an early warning indicator of public health events and other pieces of information which can be utilized across all major domains of public health. Because of the quantity of news articles, gathering this information manually is impossible. | There are a variety of endpoints that this AI output could help support. Various teams across the CDC have expressed interest in being able to quickly get the right news content, with summaries of those articles to help support outbreak detection, report generation, surveillance and monitoring of pathogen-specific news, etc. AI lets us efficiently filter and summarize thousands of news articles a day into a handful of daily "news events" that users can glean information from. | NewsScape is an AI-enabled news aggregation and summarization tool hosted within the 1CDP platform. The main motivation for building NewsScape was to develop a system that uses Large Language Models (LLMs) to surface relevant insights from recent news articles. NewsScape ingests a high volume of news articles, on the order of thousands every day, and surfaces the information in them related to topics of interest (for example, pathogen-related news articles, or U.S. medical supply chain updates). The tool can be customized based on specific program office needs, and instances can be deployed independently of one another so that each program office can have its own custom version of NewsScape installed. | 23/01/2026 | c) Developed with both contracting and in-house resources | Palantir Technologies | Yes | NewsScape is an AI-enabled news aggregation and summarization tool hosted within the 1CDP platform. The main motivation for building NewsScape was to develop a system that uses Large Language Models (LLMs) to surface relevant insights from recent news articles. NewsScape ingests a high volume of news articles, on the order of thousands every day, and surfaces the information in them related to topics of interest (for example, pathogen-related news articles, or U.S. medical supply chain updates). The tool can be customized based on specific program office needs, and instances can be deployed independently of one another so that each program office can have its own custom version of NewsScape installed. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Portfolio Analytics | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To automate the identification of themes within CDC-authored publications, providing richer information for science prioritization, evaluation, and communication. | Automated theme identification will provide centers and divisions with richer, more actionable information about their publications. Combined with impact metrics, this will aid in science prioritization, evaluation, and communication, supporting more effective and efficient scientific resource allocation. | The system outputs themes or topic clusters identified within CDC-authored publications. | 24/03/2026 | b) Developed in-house | No | The system outputs themes or topic clusters identified within CDC-authored publications. | eClearance submissions data and Science Clips data | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | RAPID Analysis of Policy and Program Documents (RAPID) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To streamline and automate the review and evaluation of policy and program documents, saving staff time, reducing the need for specialized training, providing consistent and complete answers, and enabling easy validation and collaboration. | The web application will save staff time, expand capacity, provide consistent and complete answers to policy surveillance and evaluation questions, reduce intra-rater variability, and enable easy validation and collaboration on policy projects. | An internal web application that allows users to import, store, search, and analyze policy or program documents; ask questions of relevant text segments; validate answers; and collaborate on projects. Outputs include plain language answers, binary codes or scores, and project-specific databases. | 25/09/2026 | c) Developed with both contracting and in-house resources | Yes | An internal web application that allows users to import, store, search, and analyze policy or program documents; ask questions of relevant text segments; validate answers; and collaborate on projects. Outputs include plain language answers, binary codes or scores, and project-specific databases. | RAPID analysis of DNPAO policy and program data using GPT results in project-specific databases. AI-generated data are compared to CDC manual reviews by SMEs for accuracy and reliability. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | SewerScout: Automated on-site sewage facility detection from aerial imagery to identify failed systems | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | To automate the identification of onsite wastewater systems and detect failed systems using aerial imagery, enabling public health departments to efficiently locate and assess septic systems without resource-intensive field visits. | The project will allow state, tribal, local, and territorial public health departments to more easily identify failing septic systems, address contamination risks, and improve disaster response by providing a ready catalog of systems. This will save time and resources compared to manual surveys, especially in rural and remote areas. | The system outputs include identification and mapping of onsite sewage facilities, with the intent to distinguish between functional and failed systems, supporting public health surveillance and intervention. | The system outputs include identification and mapping of onsite sewage facilities, with the intent to distinguish between functional and failed systems, supporting public health surveillance and intervention. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Sidekick Comms bot Offering User-friendly Tips (SCOUT) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To reduce the workload of health communicators by automating and simplifying the creation of web pages, social media posts, and other public-facing content, making information more accessible and understandable for the general public. | The solution accelerates content creation, reduces staff burden, and increases the accessibility and clarity of CDC information for the public. All AI-generated content is reviewed by experts to ensure accuracy and quality, supporting the CDC's mission to provide clear, science-based public health information. | The AI system generates plain language versions of existing web content, creates new content for web, social media, fact sheets, and graphics, and produces social media posts. All outputs are reviewed and edited by CDC experts before publication. | 25/01/2026 | c) Developed with both contracting and in-house resources | Yes | The AI system generates plain language versions of existing web content, creates new content for web, social media, fact sheets, and graphics, and produces social media posts. All outputs are reviewed and edited by CDC experts before publication. | The use case focuses on generating content from existing, publicly available CDC materials. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Transcribing Cognitive Interviews with Whisper | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reducing the time and effort required to transcribe cognitive interviews for federal health survey research, enabling faster and higher-quality analysis of qualitative data. | The AI system is expected to significantly reduce the hours required for qualitative review by automating transcription, enabling immediate comparison of interview concepts and answers, and providing timestamps for easier reference. This will accelerate research publication and improve the quality of survey questions used in federal surveys. | The AI generates transcripts from recorded interviews, which are used by staff for qualitative research in support of federal health survey research. | 24/07/2026 | b) Developed in-house | No | The AI generates transcripts from recorded interviews, which are used by staff for qualitative research in support of federal health survey research. | No agency-owned data was used; publicly available data was used to evaluate performance. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Use of Natural Language Processing for Topic Modeling to Automate Review of Public Comments to Notice of Proposed Rulemaking | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual review of large volumes of public comments for Notices of Proposed Rulemaking is labor-intensive and time-consuming. The AI is intended to organize and cluster comments by theme, improving the efficiency and effectiveness of manual review and ensuring all topics are accurately reported. | The AI system will enhance the speed and quality of manual review of public comments, enable better thematic organization, and reduce the burden on staff. This supports compliance with legal requirements for public comment review and improves the insights gained from public input. | The AI generates clusters of similar public comments, organized by theme, to aid in manual review. | 23/04/2026 | b) Developed in-house | Yes | The AI generates clusters of similar public comments, organized by theme, to aid in manual review. | Not specified | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Short-term Forecasting of Severe Outcomes for Seasonal and Epidemic Pathogens | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Predict severe disease outcomes - such as emergency department visits or hospital admissions - over short time horizons (1-4 weeks) to improve situational awareness for planning and decision-making at the national, state, and local level. Traditional AI/ML models (e.g. time series models) are mainly used as baselines against which to test and improve more sophisticated modeling methods. | Providing timely, accurate, and actionable forward-looking information on severe disease outcomes to government officials and the public. | Current outputs include weekly state and national hospital admissions forecasts for COVID-19 and influenza (public-facing) and weekly state and national ED visit forecasts for COVID-19 and influenza (internal to CDC at this time). | 23/10/2026 | c) Developed with both contracting and in-house resources | Yes | Current outputs include weekly state and national hospital admissions forecasts for COVID-19 and influenza (public-facing) and weekly state and national ED visit forecasts for COVID-19 and influenza (internal to CDC at this time). 
| Internal and publicly available hospital admissions data collected through the National Healthcare Safety Network (NHSN), internal and publicly available emergency department visit data collected through the National Syndromic Surveillance Program (NSSP), internal wastewater concentration data collected through the National Wastewater Surveillance System (NWSS) | No | k) None of the above | Yes | CFA's signal fusion modeling framework: https://github.com/CDCgov/pyrenew; CFA's renewal model implementation: https://github.com/CDCgov/pyrenew-hew; CFA-run COVID-19 Forecasting Hub: https://github.com/CDCgov/covid19-forecast-hub | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CDC Chatbot - Enterprise Data Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool is a general purpose assistant for CDC staff to ask questions of large sets of documents. Staff may take hours to find the relevant information, and this tool enables fast access powered by Retrieval Augmented Generation (RAG). | Staff have increased access to relevant information and documents through faster and easier knowledge management. | The system generates responses to staff questions based upon the provided available information. This includes citations and references to sections of available documents for staff to further explore. | 24/02/2026 | c) Developed with both contracting and in-house resources | Yes | The system generates responses to staff questions based upon the provided available information. This includes citations and references to sections of available documents for staff to further explore. | Documentation, standard operating procedures, or other materials supplied by staff may be used as content for the RAG Model here. Examples include the documentation from our Enterprise Data, Analytics, and Visualization platform explaining the available tools, products, and other features. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Leveraging AI for Metadata Tagging for Enterprise Data Catalog of CDC | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual metadata tagging for datasets in the CDC Enterprise Data Catalog is inconsistent, incomplete, and time-consuming. The AI automates and standardizes metadata tagging, improving catalog usability and reducing manual effort. | The AI increases the speed and consistency of metadata tagging, making the data catalog more usable for CDC staff. This reduces manual effort, improves the completeness and standardization of metadata, and helps staff more efficiently find and use relevant datasets. | The AI generates suggested metadata fields (tags) for each dataset based on existing metadata, which are then used by staff to improve dataset discovery and relevance in the enterprise data catalog. | 24/06/2026 | b) Developed in-house | Yes | The AI generates suggested metadata fields (tags) for each dataset based on existing metadata, which are then used by staff to improve dataset discovery and relevance in the enterprise data catalog. | Enterprise Data Catalog metadata fields | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Malaria parasites DNA barcode geography classification | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To complement epidemiologic investigations of domestic malaria cases by determining the geographic origin of malaria parasite strains, helping to understand how the strain entered the US. | This AI supports epidemiological investigations by providing rapid, automated classification of malaria parasite genotypes to geographic origins. This enhances the ability to track and respond to malaria cases, especially those domestically acquired, and supports public health interventions. For more information, see the manuscript: https://journals.asm.org/doi/full/10.1128/aac.01203-24 | The AI examines a sequence barcode/genotype and assigns the malaria parasite genotype to a geographic origin (e.g., continent or subregion). | 23/07/2026 | b) Developed in-house | Yes | The AI examines a sequence barcode/genotype and assigns the malaria parasite genotype to a geographic origin (e.g., continent or subregion). | Data used are a mixture of data generated at CDC and other data available publicly. CDC data: https://www.ncbi.nlm.nih.gov/bioproject/PRJNA428490/ https://www.ncbi.nlm.nih.gov/bioproject/PRJNA1092573/ https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJNA1110244 Non-CDC data: https://apps.malariagen.net/apps/pf7/ Travel histories from case patients were used to assess model performance (see manuscript: https://journals.asm.org/doi/full/10.1128/aac.01203-24) | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | The School Closure Awareness System | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To efficiently and accurately identify and categorize unplanned school closures across the U.S. using publicly available social media data, replacing a costly and labor-intensive manual process. | The AI system has saved nearly $2 million in contracting fees and reduced human work hours by 200 hours. It enables faster, more comprehensive, and more detailed capture of unplanned school closure data than the previous manual process, supporting CDC's emergency response and reporting obligations. | The system processes Facebook posts from about 40,000 school or district accounts, using a large language model to categorize posts as unplanned school closures (by event type: weather, health, facility, safety) and denote status changes (full closure, virtual, hybrid, early/late dismissal). Outputs are reviewed and recoded by staff every 24 hours. | 22/11/2026 | b) Developed in-house | Yes | The system processes Facebook posts from about 40,000 school or district accounts, using a large language model to categorize posts as unplanned school closures (by event type: weather, health, facility, safety) and denote status changes (full closure, virtual, hybrid, early/late dismissal). Outputs are reviewed and recoded by staff every 24 hours. | Publicly available Facebook posts from approximately 40,000 school or district accounts. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using Generative AI for Stance Analysis of Public Comments on CDC's Proposed Rules | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Manual review of public comments for rulemaking is labor-intensive and time-consuming due to the volume and diversity of responses. The AI system automates stance analysis and topic modeling to improve efficiency, accuracy, and insight in the review process. | The AI system can save significant time for CDC's public policy experts by automating the categorization and stance analysis of public comments, enabling faster and more comprehensive insight gathering for regulatory analysis. This supports compliance with legal requirements and improves the quality of public policy review. | The system uses generative AI to analyze public comments, providing outputs such as stance (support/oppose/neutral), topics, and sentiment for each comment. These outputs aid regulatory analysts in reviewing and summarizing public feedback. | 23/07/2026 | b) Developed in-house | No | The system uses generative AI to analyze public comments, providing outputs such as stance (support/oppose/neutral), topics, and sentiment for each comment. These outputs aid regulatory analysts in reviewing and summarizing public feedback. | Public comments submitted in response to CDC's proposed rules. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | A reusable NLP pipeline for clinical narratives preprocessing and characterization | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Autocoding to Support Adverse Drug Event Surveillance | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual coding of adverse drug event reports is time-consuming and slows down the production of prevalence estimates. The AI model will automate and speed up the coding process for surveillance epidemiologists. | The AI model will help epidemiologists quickly determine whether reported adverse drug events meet surveillance case definitions, speeding up the coding process and enabling faster, more accurate prevalence estimates for the surveillance system. | The model takes a de-identified free-text description of a patient's emergency department visit, along with other pre-coded variables, and outputs the probability that the encounter meets the surveillance case definition for an adverse drug event. | 24/05/2026 | b) Developed in-house | No | The model takes a de-identified free-text description of a patient's emergency department visit, along with other pre-coded variables, and outputs the probability that the encounter meets the surveillance case definition for an adverse drug event. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automating LIMS Bioinformatics Workflow Configuration and Enhancing Lab Quality Management with AI | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Configuring and customizing bioinformatics workflows in Clarity LIMS is time-consuming and requires specialized expertise. The AI tool automates this process, enabling rapid deployment and lowering the barrier for laboratory staff. It also improves access to quality management and regulatory documentation. | The system can reduce the time required to configure Clarity LIMS workflows, enable rapid deployment during outbreaks, lower the expertise needed for workflow customization, and enhance team learning and training by providing easy access to relevant documentation and best practices. | The AI system converts natural language lab protocols into precise XML workflows compatible with Clarity LIMS and serves as an interactive knowledge base for laboratory quality management and regulatory documentation. | 24/01/2026 | b) Developed in-house | No | The AI system converts natural language lab protocols into precise XML workflows compatible with Clarity LIMS and serves as an interactive knowledge base for laboratory quality management and regulatory documentation. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DGMH AI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Responding to inquiries is time-consuming for staff, especially when information is available but hard to find on public CDC webpages. The AI chatbot drafts initial responses using content from CDC's website, reducing turnaround time and freeing staff to focus on higher-priority tasks. | The chatbot will reduce turnaround time for responding to inquiries, improve consistency of responses, and allow staff to focus on other priorities. Evaluation will assess response accuracy, completeness, and revision needs, as well as consistency across similar inquiries. | The AI chatbot generates an initial draft response to inquiries using content from CDC's public-facing webpages. Each draft is reviewed and cleared through the existing CDC process before being sent. | 24/10/2026 | b) Developed in-house | No | The AI chatbot generates an initial draft response to inquiries using content from CDC's public-facing webpages. Each draft is reviewed and cleared through the existing CDC process before being sent. | Content from CDC's public-facing webpages | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Distiller SR: AI to screen research articles for Community Guide reviews | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Screening research articles for systematic reviews is time-consuming. The AI tool will automate and speed up the process, supporting the Community Preventive Services Task Force in making timely recommendations. | The AI tool may increase the speed of conducting systematic reviews, expediting the evaluation of public health programs for CPSTF recommendations. | The AI system uses machine learning to efficiently screen and identify research articles relevant to evaluating the effectiveness of interventions. | 23/05/2026 | b) Developed in-house | No | The AI system uses machine learning to efficiently screen and identify research articles relevant to evaluating the effectiveness of interventions. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Evaluating Generative AI for polio containment. | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Respiratory Virus Response (RVR) Data Analysis Concept | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Summarizing subject matter expert (SME) interpretations and knowledge during a public health response is time-consuming. The AI aims to improve efficiency in knowledge dissemination by generating key takeaways from SME-provided information. | The AI solution will improve the efficiency of disseminating essential information to the public, enable quicker SME review and clearance, and enhance understanding of AI limitations (e.g., bias, hallucinations) for future public health applications. | The AI generates summaries of bulleted SME information, producing key takeaways for review and clearance. The system will be evaluated for its ability to contextualize responses and improve tone and style in future iterations. | 24/06/2026 | b) Developed in-house | No | The AI generates summaries of bulleted SME information, producing key takeaways for review and clearance. The system will be evaluated for its ability to contextualize responses and improve tone and style in future iterations. | Bulleted SME information and uncleared data from the Respiratory Virus Response (RVR) | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | School LLM initial abstract review process | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual review and categorization of thousands of research abstracts related to school readiness science is time-consuming. The AI enables efficient extraction and categorization of themes, reducing human effort and time. | The AI allows for efficient categorization of thousands of abstracts in a much shorter time frame, with less human effort, and presents results in a user-friendly dashboard for health scientists to use in research and decision-making. | The AI uses an LLM to extract data from abstract reviews and categorize relevant themes and topics into a user-friendly dashboard, enabling users to pull resources from 2012–2022 for specific school closure outcomes or themes. | 23/08/2026 | c) Developed with both contracting and in-house resources | No | The AI uses an LLM to extract data from abstract reviews and categorize relevant themes and topics into a user-friendly dashboard, enabling users to pull resources from 2012–2022 for specific school closure outcomes or themes. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CDC Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool is a general-purpose assistant for CDC staff powered by large language models. Staff can upload documents, summarize information, extract information, create content, develop software code, or perform general tasks to support operational efficiency. | Staff have used this tool to save an estimated 40,000 hours across various domains, including efficiency gains in content creation, software development, and other back-office tasks within CDC. This has provided a greater than 500% ROI for the agency. | The system generates responses to staff questions in a general-purpose manner, including questions about uploaded documents. Staff may use the generated text in any manner they deem appropriate. | 24/02/2026 | c) Developed with both contracting and in-house resources | Yes | The system generates responses to staff questions in a general-purpose manner, including questions about uploaded documents. Staff may use the generated text in any manner they deem appropriate. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | EDAV Virtual Assistant - Eva (Microbot Service) - Bot as a Service (BaaS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool serves as a helpdesk that supports CDC staff in finding relevant information about the Enterprise Data, Analytics, and Visualization platform across existing documentation. | Increased access for staff to documentation and decreased time spent searching for relevant information. This includes a reduction in the number of support tickets from staff. | Staff ask the chatbot questions about various platform documentation. The AI use case returns information, including references and citations to the existing documentation, for staff to consult. | 25/08/2026 | c) Developed with both contracting and in-house resources | Yes | Staff ask the chatbot questions about various platform documentation. The AI use case returns information, including references and citations to the existing documentation, for staff to consult. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | EDAV Azure DataFactory - Pipeline failure Analysis | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | There are over 4,000 pipelines which provide logs of their status. Manual review is highly time-consuming and error-prone due to the scale of logs. | Benefits include higher-quality log summaries and additional information delivered faster than traditional manual processes. | Analysis of Data Factory pipeline logs, including recommended next steps to support staff in resolving potential challenges in maintaining these data pipelines. | 25/07/2026 | c) Developed with both contracting and in-house resources | Yes | Analysis of Data Factory pipeline logs, including recommended next steps to support staff in resolving potential challenges in maintaining these data pipelines. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | 1CDP (SEDRIC) AIP for Advanced Foodborne Outbreak Investigation (AI Summarization and Receipt Reading) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI solution impacts the process of investigating foodborne disease outbreaks. These outbreaks require cooperative efforts from CDC staff, FDA, USDA, and local agencies, and the AI system is used through a centralized data platform, the System for Enteric Disease Response, Investigation, and Coordination (also known as SEDRIC). For more information on SEDRIC, please go to our website: https://www.cdc.gov/foodborne-outbreaks/php/foodsafety/tools/index.html | SEDRIC's AIP use case provides CDC epidemiologists the ability to accelerate their investigations of multi-state foodborne disease outbreaks by more effectively leveraging data available in a rich data source, such as receipts from grocery stores, that otherwise requires extensive time and human effort to parse through. In addition, this workflow would free up epidemiologists' time and, potentially, increase the frequency with which both CDC and STLT partners could utilize shopper card data, receipts, and free-text responses to support investigations. There are two main expected benefits from this use case. The first addresses manual entry of receipt information, shopper card information, and other free-text fields, which is traditionally error-prone and time-intensive. This AI system provides a human-in-the-loop opportunity to review and update data entry points while reducing the time staff spend gaining these insights. A set structured output also increases the standardization of this information and eases reporting in situations requiring cooperation from multiple organizations. 
The second benefit comes from the summarization capability: the extensive process of mapping common names of different food items is done automatically, greatly reducing the human labor needed to generate dashboards of information regarding current foodborne investigations that serve as decision points to aid in outbreak response. | The Artificial Intelligence Platform (AIP) available within SEDRIC provides CDC epidemiologists the power to accelerate their investigations of multi-state foodborne disease outbreaks. It can extract structured data from grocery receipts, shopper card records, and free-text responses in order to catalog the food items purchased by affected patients. It can also map those items to SEDRIC-defined vehicles which categorize the items and highlight commonalities across patients, helping to pinpoint potential outbreak vehicles. AIP can summarize these results to provide insights from information pulled from shopper receipts. Given ingredients can be found in multiple food products, and some ingredients such as herbs like coriander/cilantro may go by multiple names or be reported in multiple languages, this summarization tool provides a faster way to gather summary information from receipts on different food items which may be part of a foodborne investigation. | 23/10/2026 | c) Developed with both contracting and in-house resources | Palantir Technologies | Yes | The Artificial Intelligence Platform (AIP) available within SEDRIC provides CDC epidemiologists the power to accelerate their investigations of multi-state foodborne disease outbreaks. It can extract structured data from grocery receipts, shopper card records, and free-text responses in order to catalog the food items purchased by affected patients. It can also map those items to SEDRIC-defined vehicles which categorize the items and highlight commonalities across patients, helping to pinpoint potential outbreak vehicles. 
AIP can summarize these results to provide insights from information pulled from shopper receipts. Given ingredients can be found in multiple food products, and some ingredients such as herbs like coriander/cilantro may go by multiple names or be reported in multiple languages, this summarization tool provides a faster way to gather summary information from receipts on different food items which may be part of a foodborne investigation. | Data are used in outbreak/response scenarios, such as foodborne illness outbreak response. Data used is dependent on the situation and outbreak, and may be owned by CDC, FDA, USDA, State Health Departments, Tribal Health Departments, Local Health Departments, Territorial Health Departments, or other entities. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Genetic distance computation method for comparing complex multi-locus parasite (Cyclospora) genotypes | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Investigating the similarity of infections for epidemiologic investigations of cyclosporiasis outbreaks. The method enables clustering and comparison of complex genotypes, which are too large and complex for traditional methods, to identify related infections during outbreak tracking. | The AI enables analysis of massive genotype datasets, facilitating rapid and accurate identification of infection clusters. This supports epidemiologic investigations and traceback for cyclosporiasis and other parasites, improving outbreak response and public health interventions. | The system outputs genetic distance matrices and clusters of closely related infections, based on comparisons of haplotypes from clinical samples. These outputs are used to complement epidemiologic investigations and traceback activities. | 19/09/2026 | b) Developed in-house | Yes | The system outputs genetic distance matrices and clusters of closely related infections, based on comparisons of haplotypes from clinical samples. These outputs are used to complement epidemiologic investigations and traceback activities. | Cyclospora sequence data generated by CDC, State Public Health Labs, and the Public Health Agency of Canada, following a CDC-developed protocol for 8 genotyping markers. All CDC and State Public Health Labs sequence data are publicly available via NCBI (see below). | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | MedCoder - Coding literal text cause of death information reported on death certificates to ICD-10 | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Automating the coding of literal text causes of death from death certificates to ICD-10 codes, improving accuracy, efficiency, and timeliness of mortality data for public health surveillance. | MedCoder increased the percentage of deaths that can be automatically and accurately coded from 70-75% to over 85%, resulting in substantial cost savings (hundreds of thousands of dollars) and significantly enhancing the timeliness of data for urgent public health concerns (e.g., COVID, drug overdose deaths), enabling near real-time surveillance. | MedCoder outputs ICD-10 cause of death codes from literal text on death certificates. It also flags complex or frequently miscoded cases for manual review. The system uses NLP to cleanse and standardize input text before coding. | 22/06/2026 | b) Developed in-house | Yes | MedCoder outputs ICD-10 cause of death codes from literal text on death certificates. It also flags complex or frequently miscoded cases for manual review. The system uses NLP to cleanse and standardize input text before coding. | Death certificate literal text data, including cause of death statements, and associated demographic information such as sex. Documentation for model training and evaluation data is widely available. | No | b) Sex | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NCIRD SmartFind ChatBots - Public and Internal | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Improving efficiency and effectiveness of internal partner mailbox email management and knowledge base maintenance for program staff, and previously, providing public-facing answers to FAQs. | The internal Knowledge-Bots SharePoint component helps program staff manage partner emails more efficiently and effectively, enabling shared knowledge base use across mailbox managers. The public-facing chatbots previously provided timely, agency-cleared answers to public and partner questions, supporting rapid information dissemination during the COVID-19 pandemic. | Conversational ChatBots that analyze free text questions and provide agency-cleared answers that best match the question. The system also flags complex or unanswerable queries for manual review. | 24/12/2026 | c) Developed with both contracting and in-house resources | Yes | Conversational ChatBots that analyze free text questions and provide agency-cleared answers that best match the question. The system also flags complex or unanswerable queries for manual review. | Public-facing FAQs and other agency-reviewed information accessible publicly were used as the knowledge base for the public-facing chatbots. Internal chatbot uses internal knowledge base content. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NIOSH Industry and Occupation Computerized Coding System (NIOCCS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Replacing manual coding of industry and occupation text with automated coding to standardized codes, reducing cost and increasing speed, accuracy, and consistency for research and analysis. | Reduces the high cost of manual coding, promotes increased coding speed, accuracy, and consistency, and enables more efficient use of industry and occupation data for research and analysis. | Standardized industry and occupation codes generated from free-text input, suitable for research and analysis. | 24/01/2026 | b) Developed in-house | Yes | Standardized industry and occupation codes generated from free-text input, suitable for research and analysis. | STLT's death record data (received via NCHS) and BRFSS survey data. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Nowcasting Injury Trends | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Providing real-time estimates of injury and death trends to enhance situational awareness and expedite surveillance and research activities, especially when gold standard data are delayed. | Enables timelier identification and investigation of emerging injury trends, improving the speed and effectiveness of public health surveillance and response when gold standard data are not yet available. | An internal-facing, interactive dashboard that provides week-to-week national nowcasts of injury death trends, using multiple traditional and non-traditional datasets and a multi-stage machine learning pipeline. | 22/01/2026 | b) Developed in-house | Yes | An internal-facing, interactive dashboard that provides week-to-week national nowcasts of injury death trends, using multiple traditional and non-traditional datasets and a multi-stage machine learning pipeline. | Emergency Department data from the National Syndromic Surveillance Program. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Risk Assessment Module (RAM) for the National Diabetes Prevention Program (National DPP) Operations Center. | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To assist in determining if an organization participating in the National DPP is at risk of improperly starting, going inactive, or not achieving the goals necessary for continued participation in CDC's Diabetes Prevention Recognition Program. | The RAM module helps program managers synthesize large amounts of organization-level data to make informed decisions on assisting organizations, leading to increased program participation and improved health outcomes. | The RAM is a reporting tool that ingests organization-level data (including participant enrollment, demographics, and risk factors) to generate a ranked list of organizations at highest risk of failing to meet program objectives. Outputs are currently restricted to CDC associates, with plans for future access by State Quality Specialist users. | 24/08/2026 | c) Developed with both contracting and in-house resources | Yes | The RAM is a reporting tool that ingests organization-level data (including participant enrollment, demographics, and risk factors) to generate a ranked list of organizations at highest risk of failing to meet program objectives. Outputs are currently restricted to CDC associates, with plans for future access by State Quality Specialist users. | Historical data from organizations' 6-month submissions of participant attendance in lifestyle change classes, sourced from the DDT DPRP Portal. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Semi-Automated Nonresponse Detection for Surveys (SANDS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual review of open-ended survey responses is labor-intensive and cost-prohibitive at scale. SANDS automates the detection of nonresponses in survey data, reducing the burden on researchers and improving data quality. | SANDS significantly reduces manual curation time for open-ended survey responses by providing automated scoring and flagging of nonresponses. This enables faster compilation of high-quality datasets for qualitative research and streamlines the review process for researchers. | The system outputs scores for open-ended survey responses, identifying likely nonresponses and flagging responses that require further review. This helps improve survey data quality and informs questionnaire design. | 22/09/2026 | b) Developed in-house | No | The system outputs scores for open-ended survey responses, identifying likely nonresponses and flagging responses that require further review. This helps improve survey data quality and informs questionnaire design. | 3,000 labeled open-ended responses to web probes on questions relating to the COVID-19 pandemic, gathered from the Research and Development Survey (RANDS) conducted by the Division of Research and Methodology at the National Center for Health Statistics. | Yes | k) None of the above | Yes | https://huggingface.co/NCHS/SANDS | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Sequential Coverage Algorithm (SCA) and partial Expectation-Maximization (EM) estimation in Record Linkage | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To improve the accuracy and efficiency of record linkage in CDC's National Center for Health Statistics (NCHS) Data Linkage Program, particularly for large datasets, by automating the development and selection of blocking groups and reducing manual effort. | Increased accuracy and efficiency in data linkage; automation reduces manual effort and increases scalability; machine learning algorithms adapt and improve over time, refining linkage processes; enables researchers to better examine factors influencing disability, chronic disease, health care utilization, morbidity, and mortality. | Development of joining methods (blocking groups) for large datasets; estimation of the proportion of matched pairs within each block; improved linkage accuracy and efficiency. | 20/08/2026 | c) Developed with both contracting and in-house resources | Yes | Development of joining methods (blocking groups) for large datasets; estimation of the proportion of matched pairs within each block; improved linkage accuracy and efficiency. | Data from the National Hospital Care Survey, the National Health and Nutrition Examination Survey, the National Health Interview Survey, and linked administrative data. | Yes | b) Sex | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | TowerScout: Automated cooling tower detection from aerial imagery for Legionnaires' Disease outbreak investigation | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Rapid identification of cooling towers that could potentially be spreading Legionella bacteria during outbreaks, enabling faster and more efficient outbreak response. | Detects cooling towers approximately 600 times faster than manual searches. Enables more efficient and timely response during Legionella outbreaks. Improves public health response and outbreak containment. | Detection and classification of cooling towers within aerial imagery. | 21/05/2026 | c) Developed with both contracting and in-house resources | Yes | Detection and classification of cooling towers within aerial imagery. | Aerial imagery data used for object detection and image classification. | No | k) None of the above | Yes | https://github.com/TowerScout/TowerScout | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Assessing Large Language Models for Synthetic Survey Data Generation | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Survey data de-identification is crucial for the NCHS to maximize data utility while protecting privacy, but determining and applying modern best practices requires further research. NCHS conducts national surveys and releases microdata (data containing information about individuals) for public use. To protect survey participants' confidentiality, statistical disclosure limitation techniques have been used to de-identify data, but these methods have the drawback of losing statistical properties of the original data, thus limiting useful analyses. Additionally, these methods are not designed for very large data or text data. Use of synthetic data may offer another option. The goal of synthetic data is to preserve essential statistical features and variable relationships of the original data such that statistical inference based on the synthetic data is close to that of the original data. Large language models (LLMs) may be able to address limitations of statistical methods for synthetic data creation, especially for natural language data. We aim to advance knowledge of this application of LLMs to enable staff to select the optimal tools for synthetic data generation. | Current statistical methods for synthetic data generation have drawbacks such as difficulty handling very large datasets, a steep learning curve for people with less statistics or coding background, and an inability to generate natural language data. Thus, if LLMs are evaluated to be successful at synthetic survey data generation, this alternative method would enable more data synthesis at scale, more data synthesis that can be conducted by staff with various levels of statistics backgrounds, and the first-ever release of synthetic survey text data. | Continuous, categorical, and free text data that matches properties of original survey data. | Continuous, categorical, and free text data that matches properties of original survey data. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Development of in-silico genomic and patient datasets using generative ML algorithms | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | GenAIMeta: Generative AI CDC Metadata Query Application | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Metadata plays a crucial role in enhancing public understanding and usage of CDC data. Usable metadata are essential not only for making data easy to find, understand, and use on data.cdc.gov but also for synchronizing with other federal catalogs. Metadata on data.cdc.gov spans 1,056 datasets with ~20 fields each, syncing nightly with federal catalogs. Manual validation, normalization, and monitoring of this volume, together with the inconsistent quality and completeness of those fields, create bottlenecks for data discovery, governance, and downstream analytics. The aim is to leverage EDAV's Azure OpenAI-powered models to automate metadata validation, standardization, and monitoring at scale, replacing error-prone manual checks with real-time, AI-driven oversight. | Objective: Phase 1, automate metadata validation and monitoring on data.cdc.gov using EDAV's Azure OpenAI API; Phase 2, build user-centric AI agents and broaden application of the tool with tested agent efficacy. Evaluation approach: asked both general and domain-trained models the same set of real-world questions and benchmarked responses on four metrics: accuracy (match to ideal answers), relevance (alignment with user needs), clarity (readability and actionability), and completeness (coverage of all aspects of the question). Results: domain-trained models outscored general models on every metric; trained models delivered more precise, context-aware, and fully formed answers, while general models tended toward vague or overly broad responses. Conclusion: targeted, domain-specific training significantly boosts an LLM's ability to meet specialized user requirements. Key benefits: actionable insights for better decision-making during time-sensitive scenarios; optimized resource allocation for improved efficiency; enhanced trust in decision-making frameworks through consistent performance. | Phase 1 implementation: EDAV's Azure AI infrastructure ingests and preprocesses data from data.cdc.gov, extracts vector embeddings for model training, and builds and fine-tunes LLM and ML models focused on metadata usage and quality monitoring; a monitoring dashboard connects directly to Azure AI outputs, provides real-time data-quality checks and metadata health metrics, and features interactive interfaces for key metrics and insights. Phase 2 implementation: domain-specific model training built a targeted dataset of real-world questions with paired ideal answers, with models fine-tuned and evaluated against accuracy, relevance, clarity, and completeness benchmarks; a multi-user, multi-agent framework deployed specialized agents for distinct roles, enabling simultaneous support for diverse users (data-quality managers, data scientists, epidemiologists, etc.) and ensuring scalable, task-focused collaboration. | 25/01/2026 | c) Developed with both contracting and in-house resources | Yes | Phase 1 implementation: EDAV's Azure AI infrastructure ingests and preprocesses data from data.cdc.gov, extracts vector embeddings for model training, and builds and fine-tunes LLM and ML models focused on metadata usage and quality monitoring; a monitoring dashboard connects directly to Azure AI outputs, provides real-time data-quality checks and metadata health metrics, and features interactive interfaces for key metrics and insights. Phase 2 implementation: domain-specific model training built a targeted dataset of real-world questions with paired ideal answers, with models fine-tuned and evaluated against accuracy, relevance, clarity, and completeness benchmarks; a multi-user, multi-agent framework deployed specialized agents for distinct roles, enabling simultaneous support for diverse users (data-quality managers, data scientists, epidemiologists, etc.) and ensuring scalable, task-focused collaboration. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Detecting, evaluating, and redacting PII in NAMCS HC Component | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | How effective are open-source PII detection models in identifying and redacting PII? The National Center for Health Statistics (NCHS) Division of Health Care Statistics has collected millions of health records with laboratory results from encounters at health centers via the National Ambulatory Medical Care Survey (NAMCS), Health Center (HC) Component. Due to inadvertent errors during data entry and processing, some records contain identifiers (e.g., names, locations) in non-PII fields. Due to the PII, certain fields cannot be made available for restricted or public use, but reviewing millions of records for PII is not practical. | If the process is feasible, it will significantly increase the healthcare lab data available to researchers for analysis. Additionally, the process could be applied to additional tables and years of data, increasing overall data availability. | A semi-automated process to conduct a quality control review of health data records, including potential PII records flagged for manual review. | A semi-automated process to conduct a quality control review of health data records, including potential PII records flagged for manual review. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Retrieval Augmented Generation (RAG) with Q-Bank | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using generative AI to gain insight of older adult falls | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Nowcasting Burden and Infection Trends for Seasonal and Epidemic Pathogens | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Improve real-time estimates of disease burden and infection trends for better situational awareness for planning and decision-making at the national, state, and local level. Traditional AI/ML models (e.g., time series models) are mainly used as baselines against which to test and improve more sophisticated modeling methods. | Providing timely, accurate, and actionable information on current and near-future disease risk and effort required for control to government officials and the public. | Current outputs include weekly state-level estimates of the time-varying reproductive number (Rt), a measure of epidemic trajectory and an indicator of the level of effort needed to bring an epidemic under control, for COVID-19 and influenza (public-facing) and RSV (internal to CDC at this time), and weekly nowcasts of hospital admissions within the Respiratory Virus Hospitalization Surveillance Network (RESP-NET; internal to CDC at this time). | 23/11/2026 | c) Developed with both contracting and in-house resources | Yes | Current outputs include weekly state-level estimates of the time-varying reproductive number (Rt), a measure of epidemic trajectory and an indicator of the level of effort needed to bring an epidemic under control, for COVID-19 and influenza (public-facing) and RSV (internal to CDC at this time), and weekly nowcasts of hospital admissions within the Respiratory Virus Hospitalization Surveillance Network (RESP-NET; internal to CDC at this time). | Internal and publicly available hospital admissions data collected through the Respiratory Virus Hospitalization Surveillance Network (RESP-NET), and internal and publicly available emergency department visit data collected through the National Syndromic Surveillance Program (NSSP) | No | Race/Ethnicity; Sex; Age | Yes | Current methods used by CFA: https://github.com/epiforecasts/EpiNow2; current CFA deployment pipeline: https://github.com/CDCgov/cfa-epinow2-pipeline; methods in development by CFA: https://github.com/CDCgov/cfa-gam-rt | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automating the distribution of CDC-State Department cables using AI models in the Global Health Center | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Agentic AI | The output from the AI system is the automated and timely distribution of CDC-State Department cables. It delivers accurate, formatted messages directly to the intended recipients within seconds, ensuring fast and reliable communication without manual intervention. | This AI use case reduces cable distribution time from 24 hours to 30 seconds, greatly speeding up communication. It saves staff time, improves accuracy, and helps the CDC respond faster to health threats, supporting its mission to protect public health effectively. | The output from the AI system is the automated and timely distribution of CDC-State Department cables. It delivers accurate, formatted messages directly to the intended recipients within seconds, ensuring fast and reliable communication without manual intervention. | 25/05/2026 | c) Developed with both contracting and in-house resources | Yes | The output from the AI system is the automated and timely distribution of CDC-State Department cables. It delivers accurate, formatted messages directly to the intended recipients within seconds, ensuring fast and reliable communication without manual intervention. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DHDSP NOFO Technical Assistance Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Division for Heart Disease and Stroke Prevention (DHDSP) experienced numerous challenges in coordinating with grantees/recipients to support questions related to their programs. These challenges often resulted in inefficiencies such as inconsistent communication, limited accessibility to data, potential inaccuracies in responses, and delayed responses. To address these issues, the TA Chatbot was developed to reduce the administrative burden on staff related to their assigned Technical Assistance (TA) case load. | The chatbot is trained on hundreds of DHDSP NOFO-specific documents and HHS policy documents that PDSB Project Officers, the PDSB Data Team, and AREB Evaluation TA Providers would otherwise have to search through to find answers to recipient questions. Use of this chatbot will save hundreds of hours of staff time so they can focus on other tasks to support DHDSP-funded recipients. | The Technical Assistance chatbot incorporates a Large Language Model (LLM) AI to provide quick, accurate, plain-language answers to questions on grants policy and program processes, protocols, and requirements. | 24/08/2026 | c) Developed with both contracting and in-house resources | Yes | The Technical Assistance chatbot incorporates a Large Language Model (LLM) AI to provide quick, accurate, plain-language answers to questions on grants policy and program processes, protocols, and requirements. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Rapid Detection of Acute Releases of Toxic Substances (RaDARTS) | a) Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reporting on acute releases of toxic substances involves review of news media sources and is a highly labor-intensive process in which content may be missed. The goal is to rapidly ingest, categorize, summarize, and store data from news media sources to inform situational awareness and surveillance of Acute Releases of Toxic Substances. | This project aligns with the NCEH/ATSDR Strategic Framework to: monitor and effectively respond to environmental public health hazards, emergencies, and threats that affect domestic and international health security, and build appropriate capacity within state, local, territorial, and tribal communities. This project will significantly reduce the burden on staff and the time it takes to review data, and improve the timeliness of information, all in a cost-effective manner. | Data points such as the number of people injured, the number of fatalities, and any public health actions associated with the events (e.g., shelter-in-place, evacuation) | Data points such as the number of people injured, the number of fatalities, and any public health actions associated with the events (e.g., shelter-in-place, evacuation) | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DLS AI Assistant tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Regulatory laboratory compliance has become increasingly complex, requiring scientists, lab personnel, and managers to understand laboratory quality regulations like the Clinical Laboratory Improvement Amendments (CLIA). The DLS AI Assistant is a tool designed to help scientists and lab workers by providing relevant guidance, such as the internal DLS Policy and Procedures Manual (DLS PPM) and external CLIA and International Organization for Standardization (ISO) regulations for laboratory quality. It offers personalized responses based on the user's level of expertise. In the final stage of the project, we plan to add a feature that will automatically evaluate Standard Operating Procedures (SOPs) to check if they comply with all relevant documents. However, the DLS AI Assistant is meant to assist and does not dictate compliance. | We estimate that 5-10% of all time spent within the DLS is focused on compliance efforts, such as documentation, training sessions, and audit preparations and participation. The DLS AI Assistant tool supports these quality improvement efforts by helping staff understand and follow laboratory regulations more efficiently. The goal is to streamline the review of compliance efforts without compromising quality. Additional benefits include increased harmonization and reduced time spent on evaluating edge cases. For the CDC, this tool adds another layer of checks and balances and enhances knowledge sharing, ultimately leading to better and more accurate laboratory methods. | Text-based information includes regulatory compliance details from both internal sources, such as the DLS Policy and Procedures Manual (DLS PPM), and external sources, like CLIA and International Organization for Standardization (ISO) regulations for laboratory quality. | 25/05/2026 | c) Developed with both contracting and in-house resources | Yes | Text-based information includes regulatory compliance details from both internal sources, such as the DLS Policy and Procedures Manual (DLS PPM), and external sources, like CLIA and International Organization for Standardization (ISO) regulations for laboratory quality. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | PDF information extraction for 889 Forms | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Document processing and extraction of text from required 889 compliance forms (PDFs and/or image files) into a searchable cloud-based database, eliminating manual data entry and providing a more transparent log of compliance-related information in a repository for users to check, verify, and review. | The expected benefits include greater transparency regarding vendor compliance. Users have an easier way to look up and view vendor information across the center, eliminating duplicate requests for vendor compliance. Users and staff will save time and potentially have less human error compared to manual data entry. | The 889 Document Processor is an Optical Character Recognition (OCR) model built using Microsoft AI Builder. The 889 Form Repository applies a Power Automate Flow, which triggers when a user uploads an 889 Form into the repository (SharePoint Library), applying the 889 Document Processor to read and recognize the text on the form (both print and handwritten) and extract the text into a formatted SharePoint list. | 25/04/2026 | b) Developed in-house | No | The 889 Document Processor is an Optical Character Recognition (OCR) model built using Microsoft AI Builder. The 889 Form Repository applies a Power Automate Flow, which triggers when a user uploads an 889 Form into the repository (SharePoint Library), applying the 889 Document Processor to read and recognize the text on the form (both print and handwritten) and extract the text into a formatted SharePoint list. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | ATSDR Toxicological Assistant chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Generative AI | The ATSDR Toxicological Assistant chatbot, via the CDC Chatbot, has access to query 180 comprehensive toxicological profiles (A-Z Index of Tox Profiles / Toxicological Profiles / ATSDR). The chatbot can assist users in accessing toxicological data, answering questions about specific chemicals, providing information on exposure pathways, and offering guidance on health assessments related to environmental contaminants. This tool can significantly reduce research time, ultimately enhancing efficiency in accessing critical toxicological information. | This tool significantly reduces research time, enhancing efficiency in accessing critical toxicological information and enabling an effective and rapid response to general public inquiries about chemical exposure received through ATSDR Info. | Substance Lookup: Summarizes health risks, exposure pathways, and toxicological data from a pre-built library of over 180 ATSDR toxicological profiles. Interactive Q&A: Generates answers to user questions based solely on information within the extensive toxicological profiles, ensuring accuracy and reliability. Navigation Support: Effectively guides users to specific chapters, tables, pages, and references within lengthy toxicological documents. Comparative Analysis: Enables users to compare the health effects of different substances, facilitating comprehensive environmental research and exposure assessments. Document Generation: Assists in creating documents and reports tailored to different reading levels, supporting health consultations and public communication. | 25/04/2026 | c) Developed with both contracting and in-house resources | Yes | Substance Lookup: Summarizes health risks, exposure pathways, and toxicological data from a pre-built library of over 180 ATSDR toxicological profiles. Interactive Q&A: Generates answers to user questions based solely on information within the extensive toxicological profiles, ensuring accuracy and reliability. Navigation Support: Effectively guides users to specific chapters, tables, pages, and references within lengthy toxicological documents. Comparative Analysis: Enables users to compare the health effects of different substances, facilitating comprehensive environmental research and exposure assessments. Document Generation: Assists in creating documents and reports tailored to different reading levels, supporting health consultations and public communication. | A-Z Index of Tox Profiles / Toxicological Profiles / ATSDR: https://www.atsdr.cdc.gov/toxicological-profiles/glossary/index.html | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Machine Learning with Premier Healthcare Data to inform predictive modeling of antibiotic use | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Surveillance data, including the Emerging Infections Program (EIP) and the National Healthcare Safety Network (NHSN), can be linked to other inpatient data to better characterize risk factors and outcomes of important healthcare-associated infections (HAI) and antimicrobial resistance (AR). However, discharge data often lacks detailed information about inpatient antibiotic use. | This project uses the Premier Healthcare Database (PHD), an electronic health database, to predict inpatient antibiotic use and length of therapy using data readily available in claims and other electronic health record databases. This adds additional potential sources of information to support insights. | These models will allow us to fill in gaps in antibiotic use information in Medicare claims and discharge datasets to better leverage EIP and NHSN data and better understand how cumulative antibiotic use may impact patients' risk for HAIs and AR infections. | 25/04/2026 | b) Developed in-house | Yes | These models will allow us to fill in gaps in antibiotic use information in Medicare claims and discharge datasets to better leverage EIP and NHSN data and better understand how cumulative antibiotic use may impact patients' risk for HAIs and AR infections. | No | Race/Ethnicity; Sex; Age | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | https://www.nature.com/articles/s41598-024-76089-3 | Machine Learning Techniques for Early Detection and Situational Awareness of Rabies Outbreaks | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Rabies is enzootic in wildlife, and modern-day surveillance techniques are too weak to fully capture the geographic extent of outbreaks or to detect outbreaks early. Our ML algorithm uses public health surveillance data to "fill in the gaps" inherent in wildlife disease surveillance programs to accurately and rapidly detect outbreaks and deploy public health resources. | This model is currently being used in domestic and international settings for early rabies outbreak detection. This information is shared with relevant public health authorities to initiate preventive actions, which often include: public awareness campaigns/social media, deployment of vaccines for animals and people, and deployment of testing reagents to bolster surveillance. | Disease trends for real-time monitoring of rabies; probabilities of disease occurrence over time and space. Spatiotemporal clustering with tiered risk classification differentiates stable circulation from emerging rabies transmission, improving situational awareness and guiding seasonally targeted surveillance and interventions, underscoring the need for real-time data sharing to strengthen outbreak response. | 25/01/2026 | b) Developed in-house | Yes | Disease trends for real-time monitoring of rabies; probabilities of disease occurrence over time and space. Spatiotemporal clustering with tiered risk classification differentiates stable circulation from emerging rabies transmission, improving situational awareness and guiding seasonally targeted surveillance and interventions, underscoring the need for real-time data sharing to strengthen outbreak response. | https://www.nature.com/articles/s41598-024-76089-3 | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AI-Powered web scanner for digital surveillance of rabies-related news articles | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Agentic AI | US travelers to international destinations are often exposed to rabies, and in some cases have died upon return to the US. Accurate and early identification of rabies outbreaks can help inform US travelers' pre-travel healthcare and vaccine decisions. Unfortunately, surveillance for and transparency of rabies outbreaks in international settings are unreliable and rarely reported through official government channels. | This scanner offers a low-resource method of scanning media for evidence of rabies outbreaks that jeopardize US travelers' health, faster and more reliably than relying on formal notifications or announcements from foreign governments. | Daily automated compilation of news reports with potential outbreaks, high-risk rabies exposures, species involvement, etc. | 25/05/2026 | b) Developed in-house | No | Daily automated compilation of news reports with potential outbreaks, high-risk rabies exposures, species involvement, etc. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Pathogen strain characterization from mixed strain samples | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We use the DNA sequences at specific places in pathogen genomes to create a "DNA fingerprint" that allows us to link cases of diarrheal illness and identify potential foodborne outbreaks. When a single patient has more than one strain of the same pathogen (e.g., two pathogenic E. coli), the pieces of the DNA fingerprint get mixed together in the sample and make the data unusable for outbreak surveillance. Our ML-based method is intended to sort the pieces of the DNA fingerprints into separate strains and make this data usable for foodborne outbreak surveillance. | Surveillance for diarrheal foodborne outbreaks currently depends upon the availability of bacterial isolates obtained from patient stools to obtain the pathogen "genomic fingerprints" identifying pathogen strains. The availability of these isolates for fingerprinting is declining nationwide due to technological advancements that improve patient care. To maintain our ability to detect outbreaks without isolates, CDC is developing laboratory methods that obtain the pathogen genomic fingerprint directly from the patient stool specimen. However, patient stools frequently contain more than one strain of pathogen, so the ability to deploy these methods and maintain the sensitivity of foodborne outbreak surveillance is dependent upon development of this ML-based method to sort pathogen genomic fingerprint pieces from stool. Based on FoodNet data, we estimate that failure to implement these methods could lead to the loss of up to 75% of the samples currently captured by surveillance for some pathogens. Fewer surveillance samples will mean fewer outbreaks are detected and it will take longer to detect them, resulting in more people affected. For a sense of the scale of the challenge, NORS recorded ~300 outbreaks of Salmonella and E. coli in 2023 that were detected as a result of isolate-based surveillance. Economic impact evaluations have estimated that PulseNet surveillance alone prevents ~270,000 cases of foodborne illness in the US annually for a savings of at least $500,000,000 to the economy. | Our ML-based method 1) predicts the number of strains of a pathogen found in a single sample, 2) reports the DNA fingerprint defining each strain, and 3) gives the likelihood that two samples contain the same pathogen strain. | Our ML-based method 1) predicts the number of strains of a pathogen found in a single sample, 2) reports the DNA fingerprint defining each strain, and 3) gives the likelihood that two samples contain the same pathogen strain. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AMD-Platform Data Harmonization | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Submitted metadata may not match existing standard values, decreasing metadata quality. The goal is to harmonize deviant submitted metadata to the closest current value to ensure accurate metadata; for example, "Mtb" is converted to "Mycobacterium tuberculosis". | Standardized datasets for analysis, increased metadata quality, and reduced processing time. Developed to reduce the time staff spend implementing metadata submissions. | Updated dataset with standardized metadata and improved data quality. | Updated dataset with standardized metadata and improved data quality. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Text embedding analysis tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reviewing large sets of documents to identify similar clusters or run analyses can take hours. The tool helps non-technical staff explore large, text-based datasets by generating clusters of text and identifying similar documents. | One benefit is being able to get a quick-but-principled analytic overview of a (potentially very large) text corpus's semantic content. This may yield time savings for tasks like responding to public inquiries and doing qualitative analyses of unstructured datasets. | The system generates text embeddings and helps users understand how their documents cluster in the embedding space. The system has no default output--it's primarily an AI-enabled canvas for drawing the embedding space and helping users explore the space rigorously. Users may, however, choose to export the embeddings, cluster assignments, or modified source datasets for use in other downstream analyses. | 25/01/2026 | b) Developed in-house | Yes | The system generates text embeddings and helps users understand how their documents cluster in the embedding space. The system has no default output--it's primarily an AI-enabled canvas for drawing the embedding space and helping users explore the space rigorously. Users may, however, choose to export the embeddings, cluster assignments, or modified source datasets for use in other downstream analyses. | No | k) None of the above | Yes | https://github.com/scotthlee/tars | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Agentic RAG Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The creation of accurate, detailed, and factually-grounded draft responses to public inquiries takes a large amount of time and could benefit from assistance from LLMs. Because of how complex the responses to the inquiries can be, LLMs alone did not perform well enough to be useful in production, so we decided to build a more advanced chatbot that uses a small team of agents to refine the inquiries, decide what source data to use for grounding, and generate higher-quality draft responses to the inquiries that staff can use as a starting place for writing their replies. | The primary expected benefit is time savings, especially when programs are overwhelmed by acute increases in the volume of inquiries they receive after publishing a new guideline or regulation. | Draft responses to an inquiry to serve as a base for staff. The inquiry draft will follow all CDC public health research, data, and recommendations based on the best science currently available. Inquiry responses will go through a separate agency review process prior to release. | Draft responses to an inquiry to serve as a base for staff. The inquiry draft will follow all CDC public health research, data, and recommendations based on the best science currently available. Inquiry responses will go through a separate agency review process prior to release. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Internal Newsletter Formatter Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This project addresses the challenge of efficiently summarizing and formatting lengthy newsletter documents to provide leadership with clear, concise, and actionable information. | Expected to take less time summarizing and disseminating important information for leadership. | Cleaned and formatted newsletter for internal leadership staff. Output is edited and reviewed by communications staff prior to dissemination. | 25/06/2026 | b) Developed in-house | Yes | Cleaned and formatted newsletter for internal leadership staff. Output is edited and reviewed by communications staff prior to dissemination. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Determining if multiple isolates share the same antimicrobial resistant plasmids using short read whole genome sequencing data | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Determining if multiple isolates share the same antimicrobial resistant plasmids using short read whole genome sequencing data. | This will help in the early identification of outbreaks of antimicrobial resistant healthcare-associated infections caused by the horizontal transmission of plasmids, which will help to increase the speed at which outbreaks are detected and addressed, potentially decreasing cases and saving lives. | A probability estimating whether multiple isolates share the same antimicrobial resistant plasmid. | 23/12/2026 | b) Developed in-house | No | A probability estimating whether multiple isolates share the same antimicrobial resistant plasmid. | https://www.ncbi.nlm.nih.gov/datasets/genome/ | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Analyzing multidrug-resistant organism response data | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Free text data are collected as part of internal tracking of Antimicrobial Resistance. These data are difficult to clean, categorize, and analyze. | Cleaner and more accurate data on multidrug-resistant organism responses. | Categorized free text data related to Antimicrobial Resistance. This will improve the quality and usability of existing data. | Categorized free text data related to Antimicrobial Resistance. This will improve the quality and usability of existing data. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Leveraging AI for the Creation of Synthetic Datasets | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | We aim to generate synthetic datasets that will support training, testing, and software development, improving the overall development process. | The use of an AI-assisted programming environment can significantly reduce the time required to write code for generating synthetic datasets. This efficiency allows us to quickly create the necessary variables and establish the relationships between them. The synthetic data produced through this project will enhance the training experience and streamline the software development process. | The primary outputs from the AI system will include synthetic datasets specifically designed to simulate Healthcare-Associated Infection (HAI) data. | 25/04/2026 | c) Developed with both contracting and in-house resources | No | The primary outputs from the AI system will include synthetic datasets specifically designed to simulate Healthcare-Associated Infection (HAI) data. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | DHP Data Repository and Dashboard Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | DHP uses data to drive action for preventing and addressing HIV in the United States (U.S.). Historically, DHP's data sources have not been easily accessible internally due to a siloed organizational structure. The DHP Data Repository and Dashboard aims to break down silos by co-locating and visualizing data from across the Division. Currently, the repository and dashboard focus on the management and presentation of numeric data. By integrating advanced analytics and natural language processing capabilities into the existing repository we can increase the efficiency, accessibility, and usability of narrative information that is collected in DHP. | Expected benefits include easier and more automated processing of narrative information received in DHP, leading to time saved by staff currently processing the information and more efficient and easier use of and access to the information. One of the overarching goals of the DHP repository and dashboard project is to create a mechanism for Division leadership to make informed decisions more easily through easier access to information across work areas. The chatbot is expected to support this by providing a combined source of narrative information with an approachable user interface. | The expected output from this AI-focused project is to have a chatbot with a user interface with the model behind it utilizing narrative information specific to DHP. Users will be able to ask questions at the national, regional, or state level and receive answers to questions based on the information in the narrative documents. | The expected output from this AI-focused project is to have a chatbot with a user interface with the model behind it utilizing narrative information specific to DHP. Users will be able to ask questions at the national, regional, or state level and receive answers to questions based on the information in the narrative documents. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | HIV Data Quality Score (DQS) Project | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Currently, 60 health departments and 150 community-based organizations submit National HIV Monitoring and Evaluation (NHM&E) program data using a standard online form. However, this data often contains errors that require significant manual cleanup by CDC's HIV data managers. To address this issue, our project aims to create a large language model (LLM)-based data quality score capable of detecting errors in datasets and measuring dataset cleanliness levels. LLMs can also be utilized to automatically fix some detected errors. | This project intends to enable HIV data managers to quickly identify errors, track trends in data quality by site, provide targeted technical assistance (TA), and automate some error corrections. | The outputs include both a list of identified erroneous data fields in a dataset and a dataset with some errors automatically corrected. This will be available for future evaluation efforts. | The outputs include both a list of identified erroneous data fields in a dataset and a dataset with some errors automatically corrected. This will be available for future evaluation efforts. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | NCHHSTP Social Media Modernization Strategy: Thought Leadership and Social Listening | a) Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To improve communication and ensure health messaging on social media platforms is responsive to the needs of the American people. AI-powered social listening tools are increasingly used in public health to analyze public conversations and audience sentiment in real time. We are using AI social listening tools to collect and analyze publicly available social media data. We then meet with a subject matter expert to discuss this information. Insights from the meeting with the SME and KPI data obtained natively on social platforms are used to draft, publish, analyze, and optimize social media content. | We are demonstrating how integrating AI-powered social listening into a communication strategy enhances audience engagement by enabling the creation of targeted content that addresses audience concerns and contributes factual, clinical information to trending conversations. NCHHSTP messages informed by AI-driven social listening data already show a significantly higher engagement rate and impact than those created without this approach. Weekly and monthly social listening reports also indicate an increase in the positive market share of online conversations, particularly during the promotion of NCHHSTP's updated guidelines and public comment periods. Notably, the engagement rate of one of our Center's social media accounts significantly increased from 0.017% in 2023 to 1.92% in 2024, representing a 1,029% increase since this strategy was implemented. | Visual network/cluster maps, volume and trend charts, sentiment analysis, top topics and keywords, influencers and top sources, demographics and audience insights, custom segments and thematic analysis, reports and dashboards. | Visual network/cluster maps, volume and trend charts, sentiment analysis, top topics and keywords, influencers and top sources, demographics and audience insights, custom segments and thematic analysis, reports and dashboards. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Lectora AI Toolkit and Microbuilder authoring tools | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using LLM to optimize National Health Interview Survey (NHIS) case note information | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The National Health Interview Survey (NHIS) employs Census field representatives (FRs) who use open text fields, referred to as case notes, to document their interactions with households during screening and interview processes. These case notes serve as a valuable resource, offering insights into the nature of these interactions and aiding in the identification of base cases: instances that may reveal significant data quality issues. Currently, the review of case notes is performed manually on a case-by-case basis, which limits opportunities for optimization. The objective of this initiative is to explore how large language models (LLMs) can enhance the efficiency and effectiveness of the case notes review process. | Utilizing large language models (LLMs) for case note reviews provides several advantages, including substantial time and cost savings, improved data quality post data collection, and the creation of more effective training programs. These enhancements not only optimize operational efficiency but also support the goals of the public health organization by ensuring that high-quality data is readily available for informed decision-making. | Identifying additional problematic cases not referred by Census; examining all cases from some FR whose case was referred to confirm whether similar issues exist in other cases the FR worked on; identifying themes in the case notes like certain letters/respondent materials that are in use, problematic interview strategies, or respondent confusion with questions. | 25/03/2026 | b) Developed in-house | Yes | Identifying additional problematic cases not referred by Census; examining all cases from some FR whose case was referred to confirm whether similar issues exist in other cases the FR worked on; identifying themes in the case notes like certain letters/respondent materials that are in use, problematic interview strategies, or respondent confusion with questions. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Auto-Suggest Journal Tool | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | It can be challenging and time-consuming for NCHS staff to research journals and gather the pertinent information to guide a decision about which journal to target for publication. Important information to consider when identifying a target journal includes the journal's field or subject matter, word limits, formatting and submission guidelines, whether the journal is open access, the impact factor, and acceptance rate. Existing tools use keyword searching, term frequencies, and word similarity scoring to identify potential journal matches, but AI presents the potential for a more effective approach that can consider more factors. | Reduction in researcher time spent searching through journal databases and websites to identify specific information about publication requirements. | The tool will output a list of the top matching journals along with key information about each journal, such as the field or subject matter of the journal, word limit, whether the journal is open access, impact factor, and acceptance rate. | The tool will output a list of the top matching journals along with key information about each journal, such as the field or subject matter of the journal, word limit, whether the journal is open access, impact factor, and acceptance rate. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Use of Natural Language Processing/Machine Learning to Identify Personal Identifiers in Health Center EHR Medication Data | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The NLP/ML processes being used are attempting to identify personal identifiers within EHR data fields that capture medication information such as name of medication and dosage of medication. | The expected benefits are that with the use of these techniques to remove any personal identifiers, more medication data can be made available in restricted use data files for researchers and interested persons to analyze, which would ultimately allow more robust data for studying medications administered/present during visits to health centers. | The initial output provides lists/tables of person identifiers that were identified by this tool for review and/or removal from the medication data fields. | The initial output provides lists/tables of person identifiers that were identified by this tool for review and/or removal from the medication data fields. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AI-Assisted Extraction of Circumstance Information from National Violent Death Reporting System Narratives | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The National Violent Death Reporting System (NVDRS) compiles both quantitative and qualitative data on the circumstances surrounding violent deaths, including homicides and suicides, from three key data sources related to each death: the death certificate, the coroner/medical examiner report (including toxicology results), and the law enforcement report from the law enforcement agency that investigated the death. Data abstractors in state health departments across the U.S. abstract relevant information about each death from CME and LE reports, creating narratives that describe the most notable circumstances that contributed to the deaths captured in the system. Thus, much of the valuable contextual information about these deaths, such as details about chronic pain, interpersonal arguments, or other contributing factors, is embedded in free-text narratives and is not routinely abstracted into structured quantitative data fields. Manually extracting this information is labor-intensive, time-consuming, and subject to variability. The problem we are addressing with AI is the automated extraction of specific circumstance information from these unstructured narratives, enabling more comprehensive and systematic data analysis. | Automating the extraction of circumstance information from NVDRS narratives using AI brings several important benefits. First, it significantly reduces the time required to process narrative data. While manual abstraction is resource-intensive and can take hours or days to review thousands of records, AI can accomplish this task in a matter of minutes. This efficiency is especially critical given the scale of the challenge: the NVDRS captures data on over 70,000 violent deaths annually, making manual analysis of detailed free-text information in the CME and LE narratives for each of these incidents impractical. In addition to saving time, AI improves data quality and consistency by applying uniform criteria across all records, which helps to minimize human error and variability. Furthermore, by extracting additional details, such as information about chronic pain or the presence of arguments, AI enhances the surveillance capabilities of public health officials. This richer data enables a better understanding of risk factors and circumstances surrounding violent deaths, which in turn informs more effective prevention strategies. Collectively, these outcomes directly support CDC's mission to strengthen public health surveillance, guide prevention efforts, and ultimately reduce the incidence of violent deaths. | The AI system produces structured data outputs derived from the free-text narratives in the NVDRS. For each narrative, the system identifies and extracts predefined circumstance categories (e.g., presence of chronic pain, evidence of an argument, substance use) and outputs them as structured variables (e.g., binary indicators, extracted text snippets, or coded categories). These outputs can be integrated into existing NVDRS datasets, enabling further quantitative analysis and reporting. | The AI system produces structured data outputs derived from the free-text narratives in the NVDRS. For each narrative, the system identifies and extracts predefined circumstance categories (e.g., presence of chronic pain, evidence of an argument, substance use) and outputs them as structured variables (e.g., binary indicators, extracted text snippets, or coded categories). These outputs can be integrated into existing NVDRS datasets, enabling further quantitative analysis and reporting. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Leveraging GenAI for Efficient Review of CDC Programmatic Reports | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual qualitative review of text data within programmatic reports, such as Annual Progress Reports (APRs) submitted by Injury Control Research Centers (ICRCs) and Drug Free Communities (DFCs), is resource and time intensive. Across multiple use cases, generative AI (GenAI) and natural language processing (NLP) are being leveraged to automate the analysis of programmatic data. This approach streamlines the review and evaluation of various programmatic documents, improves efficiency, and supports the assessment of performance, progress, and challenges in funded activities. | By automating the extraction and analysis of critical information from programmatic data, generative AI is expected to significantly reduce the time required for manual coding and review. For example, initial applications have shown that AI can decrease manual review time from an estimated 35 hours to just 8 hours per topic, greatly enhancing efficiency. This time savings enables staff to focus on higher-level evaluations and strategic planning, improving the consistency and accuracy of assessments across multiple program areas. | The output from the AI-based framework consists of automated analyses and summaries of insights and patterns extracted from programmatic reports, such as APRs. The AI system highlights critical barriers, challenges, key themes, and trends identified within the data, providing structured summaries and actionable information. These outputs can be compared with manual qualitative analysis outcomes for validation and further refinement. As the framework evolves, the AI will be expanded to analyze additional sections of programmatic reports, including progress toward goals, program impact, and other relevant metrics, supporting comprehensive evaluation and reporting. | The output from the AI-based framework consists of automated analyses and summaries of insights and patterns extracted from programmatic reports, such as APRs. The AI system highlights critical barriers, challenges, key themes, and trends identified within the data, providing structured summaries and actionable information. These outputs can be compared with manual qualitative analysis outcomes for validation and further refinement. As the framework evolves, the AI will be expanded to analyze additional sections of programmatic reports, including progress toward goals, program impact, and other relevant metrics, supporting comprehensive evaluation and reporting. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Bridging the gap: Leveraging natural language processing to identify reasons for buprenorphine discontinuation in Electronic Health Records | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Life-saving treatment for opioid use disorder (OUD), such as the FDA-approved medication buprenorphine, remains underutilized. Buprenorphine has been shown to reduce illicit opioid use and risk of overdose mortality. Understanding treatment barriers can offer us opportunities for improved recovery. The PanTher Electronic Health Records (EHR) data from OptumLabs are a unique and important data asset, containing structured variables, such as diagnoses and procedures, laboratory measures, and medication records, as well as semi-structured data derived from clinical notes through natural language processing (NLP). The NLP-derived data contain helpful contextual information but have been difficult to use thus far. | The Data Science Upskilling Program advances a key focus of the agencys Data Modernization Initiative, i.e., that CDC's mission is to give all people the information they need for decision-making and wellbeing. Through participation in the Data Science Upskilling Program (DSU), the DOP-DSU team was able to extract actionable insights from EHR, contextualized further by supplementing with NLP-derived data from clinical notes. They developed an algorithm identifying patients with OUD who discontinued buprenorphine and used it to characterize discontinuation reasons using EHR. This has helped provide a fuller understanding of the what and the why surrounding discontinuation of this life-saving treatment, underscoring the need for strategies that improve retention in treatment. 
The team also built important DOP capacity in working with EHR data and NLP-derived data, including assessing data quality and linking, processing, analyzing, visualizing, and interpreting these data. | Through participation in the Data Science Upskilling Program (DSU), the DOP-DSU team was able to extract actionable insights from EHR, contextualized further by supplementing with NLP-derived data from clinical notes. They developed an algorithm identifying patients with OUD who discontinued buprenorphine and used it to characterize discontinuation reasons using EHR. They were also able to better understand limitations of NLP-derived data from provider notes in EHRs. However, despite the limitations of EHR, findings from this project can complement claims data and surveys from a patient care management perspective, and close the loop in our understanding of patients' medication access journey. | Through participation in the Data Science Upskilling Program (DSU), the DOP-DSU team was able to extract actionable insights from EHR, contextualized further by supplementing with NLP-derived data from clinical notes. They developed an algorithm identifying patients with OUD who discontinued buprenorphine and used it to characterize discontinuation reasons using EHR. They were also able to better understand limitations of NLP-derived data from provider notes in EHRs. However, despite the limitations of EHR, findings from this project can complement claims data and surveys from a patient care management perspective, and close the loop in our understanding of patients' medication access journey. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Automating Influenza Vaccine Virus Data Processing | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | Information for candidate vaccine virus (CVV) data from public-facing websites is highly labor intensive to gather. Previously, R scripts (text files with computer programming commands) automated some scraping processes but still relied on manual methods to extract data from PDFs. | The automation from this AI use case reduced processing time for new data from about one week to one day, enabling faster access and analysis of CVV data to enhance CDC's preparedness for upcoming flu seasons. | The use case automates extracting key phrases, recognizing text, processing forms, and identifying entities to streamline data extraction and supplement missing CVV data from PDFs. This tool is intended for internal use only. | 25/04/2026 | b) Developed in-house | No | The use case automates extracting key phrases, recognizing text, processing forms, and identifying entities to streamline data extraction and supplement missing CVV data from PDFs. This tool is intended for internal use only. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Lineage Assignment by Extended Learning (LABEL) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying, classifying, and annotating influenza sequences. | Reliable and accurate clade assignment helps with downstream surveillance reporting and modeling. Accuracy is generally >98% and saves time through automation. | Sequence identifiers and clade annotations, intermediate data used in classification. | 14/01/2026 | b) Developed in-house | Yes | Sequence identifiers and clade annotations, intermediate data used in classification. | No | k) None of the above | Yes | https://github.com/CDCgov/label | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Enhancing influenza A risk assessment rubrics: leveraging predictive correlates and machine learning from in vivo experiments. | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The Pathogenesis Laboratory Team (Immunology and Pathogenesis Branch, Influenza Division, NCIRD) routinely performs influenza A virus (IAV) risk assessment studies in the ferret animal model, to assess IAV pathogenicity and transmissibility in a relevant small mammalian model. However, these studies are typically performed in isolation, with minimal efforts to comprehensively examine how each biological parameter obtained from the work correlates to disease severity and virus transmissibility and how these parameters can be used as a whole to improve risk assessment efforts. We have generated large sets of data collected from 25+ years of performing these in vivo studies, to identify predictive correlates associated with pathogenicity and transmissibility outcomes, and utilized machine learning approaches to better predict the potential public health risk posed by emerging influenza A viruses. | CDC's Influenza Risk Assessment Tool (IRAT) rubric is utilized to assess the pandemic potential of novel or emerging IAV that pose a threat to human health. 
A better understanding of which key quantifiable metrics of virus behavior in this species are most frequently correlated with virulence or transmissibility would greatly aid CDC leadership who score viruses in this rubric to ensure contributing data from the ferret model is rigorously and accurately contextualized within these risk assessments. As the project relies solely on previously collected in vivo data, it represents a valuable opportunity to support the 3 Rs of animal research (reduction, refinement, and replacement), gathering additional information from 25+ years of research in the ferret model already conducted at CDC, thus highlighting the agency's commitment to responsible and ethical animal research. Numerous peer-reviewed publications have already resulted from this work, including development of predictive models of lethal disease and virus transmissibility, and assessment of which parameters and sample types collected during routine laboratory experimentation offer highest predictive value in these models. These first-in-field analyses also provide an analytic framework and template for subsequent studies with other data collected at CDC. | The machine learning work we perform identifies which variables are more predictive for the associated pathogenesis or transmission outcome, which better informs us of the biology of the influenza-ferret model system for how to interpret the clinical and virological data we collect and better inform pandemic risk assessments. | 23/05/2026 | b) Developed in-house | No | The machine learning work we perform identifies which variables are more predictive for the associated pathogenesis or transmission outcome, which better informs us of the biology of the influenza-ferret model system for how to interpret the clinical and virological data we collect and better inform pandemic risk assessments. 
| https://data.cdc.gov/National-Center-for-Immunization-and-Respiratory-D/An-aggregated-dataset-of-serially-collected-influe/cr56-k9wj/about_data ; https://data.cdc.gov/National-Center-for-Immunization-and-Respiratory-D/An-aggregated-dataset-of-day-3-post-inoculation-vi/d9u6-mdu6/about_data | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | RepoAnalysis | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | We are trying to generate a summary of code repositories that are insufficiently documented. | This would allow people to search for code and reuse code that is not documented well enough for traditional search engines. | The output from the AI is a summary of what the code repository does based on the source code and/or README. | 25/01/2026 | b) Developed in-house | Yes | The output from the AI is a summary of what the code repository does based on the source code and/or README. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Data Standardization with LLM | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The purpose of this project is to enhance data standardization efforts by leveraging large language models to improve data cleaning and standardization processes, ultimately enhancing the overall efficiency and accuracy of data. | Improve data cleaning and standardization processes within our data systems. Enhance the accuracy and reliability of data stored. Streamline workflows by automating certain tasks related to data cleaning and standardization. More specifically, the ultimate scope of this project includes integrating the LLM into the existing infrastructure of FluLIMS, covering both the data and the standardization rules. For example, sometimes we have misspelled locations or various ways of referring to the same place (e.g., ATL vs. Atlanta) and we would like to standardize that. By implementing this proposed project, we anticipate significant improvements in data cleaning and standardization processes within FluLIMS, leading to enhanced efficiency, accuracy, and overall effectiveness in managing flu-related information. This will most likely lead to at least a 50% reduction in time and effort to clean data. This approach can also be used for cleaning other data or for other processes where an LLM would be useful. | This would result in a higher quality dataset with increased standardization, increasing the usability of the insights for staff. | This would result in a higher quality dataset with increased standardization, increasing the usability of the insights for staff. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Using Databricks Genie for Routine Immunization Data Insights | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | The volume of immunization data reported quarterly is approximately 5 billion records. Analyzing this data to gather high-level insights requires complex coding and manipulation. By using a Large Language Model (LLM) based approach, the solution offers a plain language query capability that generates code to provide high-level insights without the need to move or create specific views for program needs. This approach is cost-effective, timely, and provides insights that can be used to improve data quality and inform data management planning by programs. | The use of a Large Language Model (LLM) based approach for generating code to analyze immunization data offers several promising benefits for the CDC's immunization program staff and data operation team: 1. Enhanced Efficiency and Time Savings: By enabling plain language queries, this approach significantly reduces the time and effort required for complex data manipulation and coding. This will allow program and data ops staff to focus on more critical tasks and plan data management tasks more efficiently. 2. Improved Data Quality and Management: The insights generated can help identify data quality issues and inform better data management practices. This will potentially lead to more accurate and reliable data. 3. Cost-Effectiveness: Simplifying the analysis process reduces the need for extensive manual labor, specialized coding skills, and compute costs. 4. Scalability: Handling approximately 5 billion records quarterly, this approach can scale to meet the demands of large datasets, ensuring timely and comprehensive analysis. 
| SQL Code, Reports, Visualization and Charts | 25/06/2026 | c) Developed with both contracting and in-house resources | Yes | SQL Code, Reports, Visualization and Charts | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Vaccine Tracking System (VTrckS) Conversational AI Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | The VTrckS Conversational AI module will provide a user-friendly, prompt-driven approach to gaining insights into the VTrckS datasets. The current approach involves utilizing SAP HANA's reporting modules and custom dashboards. The conversational AI approach will allow for a more human-friendly path to quick insights without having to learn about the underlying data models. | This AI pilot aims to add efficiency for CDC awardees and CDC program staff in gathering insights from the VTrckS order, distribution, shipping, transfers, and provider information available in NCIRD's Advanced Business Intelligence Platform (NABIP). This tool will allow awardees to rapidly gather insights with no coding required, both during routine operations and local or national emergency response. Some examples of data insights users gather from the data include: - Assess active providers: Determine if a provider is compliant to obtain vaccines through the VFC or 317 programs. - Provider management: Validate the address and scope of providers offering vaccines to the public. - Determine providers with specific vaccine availability: Determine provider vaccine inventory unique to vaccines distributed. - Assess provider locations against vulnerable populations: Quickly respond to routine or emergent requests for information related to provider locations with key vaccine inventory. | The tool will output responses to prompts that allow VTrckS program users and awardees to query their data sets. For example: Input: How many providers are in the State of Washington Output: | 25/10/2026 | c) Developed with both contracting and in-house resources | Yes | The tool will output responses to prompts that allow VTrckS program users and awardees to query their data sets. 
For example: Input: How many providers are in the State of Washington Output: | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Beryllium exposure reconstruction | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Using machine learning to extract information about job title and task and assign relevant exposure codes. | Previously exposure codes were assigned manually by researchers. This method would increase consistency and accuracy of exposure coding and substantially reduce the amount of time needed to assign exposure codes and reduce exposure misclassification. | Exposure codes related to Beryllium exposure. | Exposure codes related to Beryllium exposure. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Site Audit AI Support (SAAIS) App | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NIOSH conducts numerous post-market activities to ensure that respirator configurations approved by NIOSH remain protective. Among these activities are audits of the manufacturing sites where the approved products are produced. These audits involve detailed evaluations of sites against quality assurance plans approved by NIOSH. Sites are audited every few years and, thus, previous reports of nonconformances may be of particular interest to auditors. SAAIS is a prototype application being developed as an AI use case to inform: 1. CDC Office of the Chief Information Officer (OCIO)'s implementation strategy for cloud-based enterprise services, standalone AI tools, and AI-enhanced systems 2. The specific enterprise approach, AI tools, and training techniques to be used by Respirator Approval System (RAS) developers when AI enhancements are eventually added following the completion of the base system within the Power Platform enterprise system. | Reduced time and errors and greater consistency related to audits of manufacturing sites used to produce NIOSH Approved respirators. | Four versions of the NPPTL App were developed, each introducing incremental features to enhance its functionality. Details are available upon request. Capabilities currently include: drag-and-drop file uploads; multiple file uploads; clear navigation and guidance; download options for AI outputs; connection of evidence to CAR items; classification of non-conformances; support for Excel and email file uploads; and enhanced feedback mechanisms and reporting features. | Four versions of the NPPTL App were developed, each introducing incremental features to enhance its functionality. Details are available upon request. 
Capabilities currently include: drag-and-drop file uploads; multiple file uploads; clear navigation and guidance; download options for AI outputs; connection of evidence to CAR items; classification of non-conformances; support for Excel and email file uploads; and enhanced feedback mechanisms and reporting features. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Respirator Selection Logic (RSL) Copilot | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Workers rely on NIOSH Approved respirators to protect them from inhaling high-consequence particulate, gas, and vapor hazards. Some examples of these respiratory hazards include: wildfire, structural, or surgical smoke; mold during post-flood remediation efforts; infectious diseases such as tuberculosis; chemicals used to clean or disinfect; and particles liberated when cutting rock in industries such as construction and mining. Selecting the correct respirator to protect workers requires knowledge of the hazard or hazards present, the job task, and the environment. NIOSH's Respirator Selection Logic (RSL) is a state-of-the-art tool designed to guide the selection of appropriate respiratory protection devices based on specific workplace hazards and conditions. The RSL requires users to enter detailed, task- and environment-specific information at multiple decision points to execute its logic correctly. | No current AI tool operationalizes the RSL while addressing the challenge of gathering and validating highly specific input information required at each decision point. The absence of such assistance leads to errors in respirator selection that can cause hazardous exposures, regulatory violations, and adverse health outcomes. Developing Ally to close this gap will improve the effectiveness of respiratory protection programs by ensuring that users supply accurate, relevant data to the RSL, thereby enhancing the quality and traceability of respiratory protection decisions aligned with established federal guidance. | Upon completion of this project, users of the RSL will be able to: Receive real-time guidance on what information is required for respirator selection and why it matters. 
Provide input in natural language rather than navigating technical documents or forms manually. Understand and apply the RSL more effectively, leading to fewer errors in respirator selection, improved compliance, and stronger respiratory protection outcomes. Use Ally as a decision support tool, not a decision maker, to identify and clarify required inputs, understand the rationale behind each RSL step, and access authoritative guidance. The Copilot will always keep the user in the loop, helping them apply judgment while ensuring traceability to official sources like NIOSH and OSHA. This project will demonstrate how AI can support complex public health decision tools like the RSL while maintaining user accountability, transparency, and regulatory defensibility. | Upon completion of this project, users of the RSL will be able to: Receive real-time guidance on what information is required for respirator selection and why it matters. Provide input in natural language rather than navigating technical documents or forms manually. Understand and apply the RSL more effectively, leading to fewer errors in respirator selection, improved compliance, and stronger respiratory protection outcomes. Use Ally as a decision support tool, not a decision maker, to identify and clarify required inputs, understand the rationale behind each RSL step, and access authoritative guidance. The Copilot will always keep the user in the loop, helping them apply judgment while ensuring traceability to official sources like NIOSH and OSHA. This project will demonstrate how AI can support complex public health decision tools like the RSL while maintaining user accountability, transparency, and regulatory defensibility. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | PPE Concerns Copilot | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The PPE Concerns Mailbox receives inquiries that NIOSH/NPPTL staff must review and respond to. To date, over 10,000 questions have been received with answers provided. Responding to an inquiry requires a multi-step process that staff must complete manually, including: reading the inquiry; logging the inquiry and information about the inquiry and inquirer in an Excel spreadsheet; searching through the spreadsheet of past questions; reviewing prior responses for relevance; drafting a reply based on similar previous responses, or researching and composing a new answer from scratch if no match is found; sending the response to additional staff if further subject matter expert review is needed; obtaining Executive Leadership review and approval, if needed; and updating the spreadsheet with the finalized response, reply date, and staff who assisted with the response. This process can be time-consuming, repetitive, and inconsistent, especially when multiple team members are handling and categorizing inquiries or when there are high volumes. | The Copilot could handle the initial steps (reviewing the question, searching past responses, and drafting a reply), which currently take the most time. For straightforward or simple, repeat questions, this could reduce staff time from 30 to 60 minutes to under 10 minutes, with staff only needing to review and finalize the AI-generated draft. Even for more complex inquiries, having a well-structured starting point and a searchable interface would significantly cut down manual effort and improve turnaround time across the board. Additionally, the Copilot would improve time savings when a new staff member is assigned to managing the mailbox due to events such as staffing changes. 
The Copilot would allow a more seamless transition of the mailbox, whereas the current process requires months of on-the-job training to effectively navigate the spreadsheet and learn the proper standard responses to use for specific inquiries. Additionally, the Copilot would remove the burden of relying on memory recall to determine where a previous response is in the spreadsheet when staff receive specific, repeat questions. | The Copilot will analyze incoming questions, search the existing dataset for relevant responses, and generate draft replies for human review. After staff revise and approve the response, the Copilot will assist by sending the finalized email to the submitter and updating the spreadsheet with the new question-and-answer (Q&A) pair and supporting information. | The Copilot will analyze incoming questions, search the existing dataset for relevant responses, and generate draft replies for human review. After staff revise and approve the response, the Copilot will assist by sending the finalized email to the submitter and updating the spreadsheet with the new question-and-answer (Q&A) pair and supporting information. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Assessing public comment responses to draft NIOSH wildland fire smoke document | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Efficiently compile and synthesize the public comments received for a draft NIOSH document. | Make the NIOSH response to public comments more efficient and effective and reduce the time needed to review public comments. | Public comment responses compiled in various formats to make the response process more efficient. | Public comment responses compiled in various formats to make the response process more efficient. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Mining.AI | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Mining.AI is an innovative initiative led by a small team at NIOSH tasked to develop a domain-specific generative LLM AI platform focused on improving health and safety in the mining industry. The project consolidates NIOSH's latest peer-reviewed research and safety knowledge to support decision-making, hazard prevention, and incident response. | Accelerates Access to Critical Safety Knowledge: Mining.AI enables instant retrieval of NIOSH mining research, technical reports, and best practices, replacing slow manual searches through multiple archives. Supports Faster, Safer Decision-Making: Frontline professionals, engineers, and safety managers can query the AI to get tailored responses grounded in NIOSH-validated scientific evidence. Preserves Expertise: Encodes decades of institutional knowledge into a reusable, interactive platform, mitigating the impact of staff turnover and retirements. Promotes Research Translation: Converts dense, technical research into actionable language accessible to a wider range of users, including mine operators and workers. | A NIOSH-specific internal chatbot tool that researchers can interact with to assist in the digestion of all previous NIOSH published articles. | A NIOSH-specific internal chatbot tool that researchers can interact with to assist in the digestion of all previous NIOSH published articles. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CSB MCP AI | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Reduce required man-hours and improve the accuracy of reports on CSB infrastructure by assisting infrastructure administrators with tasks and duties. | Expected to generate reports for and answer questions posed by senior staff about CSB infrastructure, improving accuracy and freeing infrastructure administrators from these tasks. Assist infrastructure administrators with tasks and duties, improving accuracy and completion time. | Reports, infrastructure code generation, and task execution. | Reports, infrastructure code generation, and task execution. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Support for MCP and AI APIs in Digital Gateway | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Digital Gateway (CDC's API Platform) can support MCP and AI APIs. Given MCPs are a new addition to the ecosystem, the goal is to explore what it takes to implement MCPs internally. | Centralized Management: The Digital Gateway can act as a centralized point for managing interactions between AI agents and MCP servers. This simplifies the integration process and ensures consistent policy enforcement across all tools and data sources. Enhanced Security: By routing all requests through the Digital Gateway, the CDC can enforce security policies, access controls, and rate limiting. This helps protect sensitive health data and ensures compliance with regulatory requirements. Improved Data Governance: The Gateway can provide visibility into data usage and interactions, helping the CDC maintain robust data governance practices. This includes monitoring access, usage patterns, etc. | The output will be a test Model Context Protocol (MCP) server. This will be an internal server which will have access to public-only data to support future development of the related Digital Gateway infrastructure for potential future MCPs and APIs. | The output will be a test Model Context Protocol (MCP) server. This will be an internal server which will have access to public-only data to support future development of the related Digital Gateway infrastructure for potential future MCPs and APIs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Delegations Repository Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Delegations Library, which houses critical guidance on authorities and approvals, has been selected as the pilot use case due to its centralized structure and high relevance. Staff frequently report difficulty locating the right documents, resulting in delays and inefficiencies. This project will use CDC's internal EDAV platform to develop a chatbot assistant that helps users retrieve existing content more easily. The proof of concept will test technical feasibility, user experience, and potential for broader application to support CDC's operational mission. | Staff currently spend considerable time searching for this type of guidance, which slows administrative actions and diverts focus from core public health work. By streamlining access to internal policies and procedures, this proof of concept supports greater operational efficiency. While the chatbot does not directly impact health outcomes, it enables staff to redirect time toward critical public health priorities and lays the groundwork for applying similar tools across other business functions that support CDC's mission. | Staff will receive responses with information from existing delegation-related documents within the internal Delegations Library. The responses will include citations and references to the delegations library documents. | Staff will receive responses with information from existing delegation-related documents within the internal Delegations Library. The responses will include citations and references to the delegations library documents. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | OFR Robotics and Process Automation (ORPA) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Agentic AI | Reduction in human workload and increase in quality through automation of repetitive tasks. | Multiple Robotics and Process Automation (RPA) bots are in use. Most are simple automations of basic tasks (conversion of documents/file management). The two most significant bots focus on automating the sorting and distribution of thousands of invoices annually (Invoice Mailbox Management) and conducting government purchase card use validations monthly (PCARD Line Item Review). The results of the two bots are greater consistency/quality in the outputs and a reduction in human workload. The Invoice Mailbox Management bot processes >15,000 emails annually and the PCARD Line Item Review bot processes 3,000 to 5,000 transactions monthly against more than 200 unique business rules. | The Invoice Mailbox Management bot creates PDF files with the email and attached invoices/documentation into consolidated files for further processing by staff for payment into UFMS. The PCARD Line Item Review bot generates a list of transactions (from the full list of CitiBank credit card transactions) that have met specific business rules and may be a potential policy violation or need additional attention. The output also includes policy references and notes (specific to the transaction) to aid the reviewer in determining the next course of action for each item. | 24/05/2026 | c) Developed with both contracting and in-house resources | UIPath | Yes | The Invoice Mailbox Management bot creates PDF files with the email and attached invoices/documentation into consolidated files for further processing by staff for payment into UFMS. 
The PCARD Line Item Review bot generates a list of transactions (from the full list of CitiBank credit card transactions) that have met specific business rules and may be a potential policy violation or need additional attention. The output also includes policy references and notes (specific to the transaction) to aid the reviewer in determining the next course of action for each item. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | FERRET | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | High-throughput processing of unstructured data necessary for mandated reporting. | Using programs to automate the identification, extraction, and restructuring of data will significantly decrease human involvement and processing time. Once deployed, the time savings are anticipated to be on the scale of weeks to months. | Structured data deposited into a SQL database. | Structured data deposited into a SQL database. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Deep Research for Public Health | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Public health agencies like the CDC face challenges in efficiently processing large volumes of complex information, conducting evidence-based research, and producing timely, high-quality analyses and reports to inform decision-making. Traditional workflows for tasks such as literature review, data analysis, policy evaluation, and communications are often time-consuming and resource-intensive. The problem addressed by integrating agentic AI models, such as OpenAI's Deep Research, is to enhance the efficiency, productivity, and rigor of these core public health functions by automating information retrieval, synthesis, and analysis, thereby enabling faster, more informed, and scalable decision-making while maintaining quality and transparency. | Empirical evidence from the report shows that the AI compressed tasks that would normally take days or months into a single automated workflow, with 92% of subject matter experts reporting substantial productivity gains. Quantitative analysis from an internal study found that 94% of prompts resulted in successful, high-quality reports (median rating: "very good"), with most completed in under 30 minutes. The AI demonstrated strong effectiveness in information retrieval, data analysis, and strategic planning across a wide range of public health domains, enabling faster, more informed decision-making and allowing CDC staff to focus on higher-level work that benefits public health outcomes. | The output from the AI system consists of detailed, report-style responses tailored to specific public health tasks and prompts. These reports typically include synthesized information from online sources, data analysis, summaries of scientific evidence, policy or legal analysis, and clear recommendations or findings, often with citations. The reports are structured, well-organized, and written in clear language, making them easy for CDC staff and subject matter experts to review and use. According to the evaluation, the outputs scored highly for clarity and reasoning transparency. | 25/04/2026 | a) Purchased from a vendor | OpenAI | Yes | The output from the AI system consists of detailed, report-style responses tailored to specific public health tasks and prompts. These reports typically include synthesized information from online sources, data analysis, summaries of scientific evidence, policy or legal analysis, and clear recommendations or findings, often with citations. The reports are structured, well-organized, and written in clear language, making them easy for CDC staff and subject matter experts to review and use. According to the evaluation, the outputs scored highly for clarity and reasoning transparency. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Federal Select Agents Program (FSAP) Customer Agent | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FSAP users need support on the use of the FSAP system: completing forms; managing entities, agents, and toxins; updating permits; correcting data; conducting inspections; and related activities. The FSAP Customer Agent will provide answers to questions as users navigate the FSAP process. | Support will be provided to users without additional staffing requirements. Users will get fast and accurate answers to questions instantly. | The output is text via an LLM trained on operations and management data. Output is via both a chat window and a copilot for internal use. | 25/03/2026 | b) Developed in-house | Yes | The output is text via an LLM trained on operations and management data. Output is via both a chat window and a copilot for internal use. | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Publication Portfolio Analytics | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Data Strategy and Analytics Team (DSAT) within OS employs natural language processing (NLP) topic modeling techniques to help programs identify common themes within their publication data. By combining these efforts with bibliometric analysis, we standardize the reporting of media attention, as well as policy and academic citations, by theme and/or CDC organization. Automating these efforts helps the CDC library optimize its allocation of resources and avoid duplication of effort. | OS DSAT utilizes NLP to generate organization-specific reports and maintain an agency-wide dashboard. In the case of the agency-wide publication impact dashboard, NLP topic modeling was used to identify common publishing themes for the agency using 10+ years of CDC publication data. This allows users to see how publishing topics have changed over time and, when connected to media attention and citation data, supports communication teams, leadership, and scientists in assessing the impact of their programs' publications. This dashboard has proven to be impactful, with 105 unique CDC staff across several CIOs and divisions using the dashboard between 7/1/2025 and 8/1/2025. | The outputs of the OS DSAT Publication Portfolio Analysis work include a pipeline/Power BI dashboard workflow and several center- and division-specific reports and presentations. | 23/06/2026 | b) Developed in-house | Yes | The outputs of the OS DSAT Publication Portfolio Analysis work include a pipeline/Power BI dashboard workflow and several center- and division-specific reports and presentations. | No | k) None of the above | Yes | Sample code for publication portfolio analytic activities can be found here: https://github.com/cdcai/analysis-bertopic-cdc-publications/blob/main/Topic%20Modeling/2024_CDC_Topic_Model_Code_Workbook.ipynb | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | CDC Vault and Stacks Metadata Extraction | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Agentic AI | We are attempting to speed up the process of generating a digital metadata record for objects that will be curated and stored in either the CDC Public Access Platform (Stacks) or CDC Vault. These two systems are built using the same software stack, but one is for public data and the other is for non-public data. Creating a metadata record solely with a human takes about an hour per document. We are looking to use AI to prepare the metadata record and reduce the human time to under 5 minutes. A secondary objective is to have a non-human process for the non-public data that will go into CDC Vault. | There are two primary paths and uses for the AI-assisted pre-processing. The first is to improve the speed and effectiveness of human catalogers/librarians. Long term, we need to be able to process more data and require AI to improve this process so that humans work only on critical steps and validation of the AI. This process is going from 60 minutes per document to <5 minutes per document. The second is to process federal records before a record is entered into CDC Vault and copied to NARA. This process will not have a human review, as the final disposition is not public, but we need to process a large number of files (hundreds of thousands to millions). This is simply not realistic to do with humans, so this is a novel opportunity. | The AI will return up to 41 metadata elements (e.g., Title, Author, Subject, Description, Funding Source, Geographical Location). | 25/04/2026 | c) Developed with both contracting and in-house resources | Yes | The AI will return up to 41 metadata elements (e.g., Title, Author, Subject, Description, Funding Source, Geographical Location). | https://stacks.cdc.gov/ | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CDC | AIP Assist | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | AIP Assist supports user engagement with the Palantir Platform. It is an LLM-powered support tool that can answer inquiries about the platform and can also guide users in developing their own applications, either by helping them write their own code or by pointing them to the right tool for the task. It provides assistance with platform navigation, coding tasks, documentation support, and problem solving. | AIP Assist benefits the agency and the general public by enabling users to quickly understand the platform, which allows them to quickly ramp up the development of new applications. It provides assistance with platform navigation, coding tasks, documentation support, and problem solving for users working on applications in many roles, including data science, data engineering, machine learning, and AI. | AIP Assist is an LLM-powered tool available to all 1CDP users. Using generative AI and internal documentation, it makes the capabilities of the platform accessible to the user. The tool works as an assistant that helps users understand the platform and quickly iterate on the development of new applications. The assistance AIP Assist provides is generated text intended for platform navigation, coding tasks, documentation support, and problem solving. | 24/10/2026 | a) Purchased from a vendor | Palantir Technologies | Yes | AIP Assist is an LLM-powered tool available to all 1CDP users. Using generative AI and internal documentation, it makes the capabilities of the platform accessible to the user. The tool works as an assistant that helps users understand the platform and quickly iterate on the development of new applications. The assistance AIP Assist provides is generated text intended for platform navigation, coding tasks, documentation support, and problem solving. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CDC | Center for Forecast and Analytics (CFA) Model Studio | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Other | This tool is intended to provide streamlined infrastructure that allows users to bring their own models, or discover existing ones, and evaluate them in a streamlined way. For example, users can develop models on their local machines, containerize them, and deploy them with CFA Model Studio on the platform for further experimentation, parameter exploration, and registry. | This tool is intended to extend the Native Modeling Objectives capabilities on the platform, reducing the effort and burden of bringing models onto the platform and especially allowing for R-based models and fine-grained evaluation. It also adds flexibility for users to utilize their preferred modeling language and tools. | Model Library is a tool where users can go to discover, upload, and test their various models. The outputs are typically test runs and the ability to evaluate model performance. | 25/01/2026 | c) Developed with both contracting and in-house resources | Palantir Technologies | Yes | Model Library is a tool where users can go to discover, upload, and test their various models. The outputs are typically test runs and the ability to evaluate model performance. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CM | AI-assisted comment triaging tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To reduce the labor hours of manual comment review, we use AI to assist in comment review and triage and in identifying form letters. | Cost savings and time savings to the government. | Compiles public comments by topic in the rule. | 22/06/2026 | c) Developed with both contracting and in-house resources | L&M Policy Research, LLC | No | Compiles public comments by topic in the rule. | Contractor uses prior-year public comments to train the tool for the upcoming comment period. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Enhanced Direct Enrollment Outlier Detection | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The Enhanced Direct Enrollment Outlier Detection use case is not considered high-impact, as it does not serve as the basis for decisions or actions that affect civil rights, liberties, privacy, critical resources, safety, or strategic assets. The AI models within this use case use machine learning techniques to detect anomalous or inconsistent patterns relative to standard partner actions and other application channels outside of the EDE pathway. The findings are shared with the agency divisions in the form of different reports and tables. The data alone is not enough to determine fraud; however, it can be used by CMS in tandem with other data to determine whether CMS should take any corrective actions. CMS makes all determinations on actions. | Classical/Predictive Machine Learning | (Marketplace) Enhanced Direct Enrollment (EDE) allows consumers to apply for and enroll in an exchange plan directly through an approved partner's UI, without being redirected through the Healthcare.gov application. These partner systems directly interface with the APIs developed by the FFE. As EDE partners gain more control over their application process, the FFE must ensure program integrity. | Implement ML to identify anomalies/quality issues with partner-submitted person, application, and policy data. | Ensure FFE EDE program integrity | Ensure FFE EDE program integrity | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | Feedback Analysis Solution (FAS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Speeds up the manual and time-consuming process of analyzing public comments on frequently posted regulations.gov and FDMS dockets by employing advanced natural language processing and machine learning technologies. | Speeds up the manual and time-consuming process of analyzing public comments on frequently posted regulations.gov and FDMS dockets by employing advanced natural language processing and machine learning technologies. FAS helps categorize stakeholder comments or feedback (collected in multiple venues), thereby enabling analysts to use the system to quickly identify comments that may impact program/policy decisions. | FAS categorizes stakeholder feedback (collected in multiple venues), thereby enabling CMS analysts to use the system to quickly identify comments that may impact program/policy decisions. The system utilizes Artificial Intelligence (AI) to minimize bias through topic, theme, stakeholder, and sentiment models that standardize the analysis process and provide insights that were previously difficult to obtain manually. | 21/09/2026 | b) Developed in-house | Yes | FAS categorizes stakeholder feedback (collected in multiple venues), thereby enabling CMS analysts to use the system to quickly identify comments that may impact program/policy decisions. The system utilizes Artificial Intelligence (AI) to minimize bias through topic, theme, stakeholder, and sentiment models that standardize the analysis process and provide insights that were previously difficult to obtain manually. | Uses APIs to pull comments from Regulations.gov, FDMS, and the Federal Register. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | IT System Utilization Optimization | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) The FFE application's usage patterns vary and depend on differing environment usage periods. CMS resources are currently scaled manually, which prevents immediate action in response to usage changes. | Implement ML to determine optimized infrastructure/application scaling to support system volume. | Automation of application scaling | Automation of application scaling | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Risk Adjustment Outlier Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The risk adjustment (RA) program spreads the financial risk borne by issuers that offer a variety of plans meeting the needs of a diverse population. RA payments are distributed based on population risk levels. CCIIO uses a distributed data solution (EDGE servers) to calculate plan average actuarial risk and associated RA transfers and must avoid potential program integrity risks to annual calculations. | Implement ML to identify outliers in issuer data that may unduly influence risk adjustment transfers. | Maintain RA program integrity | Maintain RA program integrity | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Agent/Broker Fraud Analysis (ABCQI) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The Agent/Broker Fraud use case is not considered high-impact, as it does not serve as the basis for decisions or actions that affect civil rights, liberties, privacy, critical resources, safety, or strategic assets. The AI models within this use case use machine learning techniques to detect patterns that are inconsistent or anomalous relative to standard consumer, agent/broker, and partner actions. Our findings are shared with the agency divisions in the form of reports and tables and can be used in combination with additional information derived outside of this AI tool to determine whether CMS should take any corrective actions. All outcomes are internal facing. CMS makes all determinations on actions. | Classical/Predictive Machine Learning | (Marketplace) Agents and brokers support the consumer enrollment and eligibility process. Because of this, they have learned the intricate details of the Federally-facilitated Exchange (FFE) for accessing applications, submitting eligibility determinations, and adding enrollments to their line of business, opening up the possibility of fraud. | Implement Machine Learning (ML) to identify potential fraud/waste/abuse within Agent/Broker data. | Reduce waste, fraud, and abuse | Reduce waste, fraud, and abuse | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | CCSQ ServiceNow AI Search | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI Search index stores data from ServiceNow AI Platform® records or external sources and makes that data available for users to search in multiple applications. Search query features use ServiceNow AI Platform technologies to improve the search user experience. | CCSQ ServiceNow AI Search is a product inside ServiceNow (SaaS). It replaces the traditional Zing search tool (exact search) and enables more flexible searches so users get relevant and actionable answers quickly. * Improve search relevance * Promote self-service by empowering users to find information independently, and potentially reduce the number of cases | AI Search will * Display the most relevant results first * Support synonyms, auto-corrections, stop words, and auto-completion. AI Search analytics will provide insights into search usage, performance, trends, metrics, and how to improve search experiences. An AI Search tune-up is planned next. | 24/04/2026 | a) Purchased from a vendor | Yes | AI Search will * Display the most relevant results first * Support synonyms, auto-corrections, stop words, and auto-completion. AI Search analytics will provide insights into search usage, performance, trends, metrics, and how to improve search experiences. An AI Search tune-up is planned next. | CCSQ ServiceNow Database | Yes | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OPOLE | Complaint Analysis | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Conduct high volume case analysis to identify root causes/trends that can be validated by SMEs and act as a workforce multiplier. | Reduce repeat issues delaying benefits or access to care, and improve health plan compliance with federal rules and regulations. | Identify trends, applicable regulatory citations, sample for validation, and recommended next steps. | 24/02/2026 | b) Developed in-house | Yes | Identify trends, applicable regulatory citations, sample for validation, and recommended next steps. | data extracts from HPMS/CTM, and manually validated results based on standard criteria | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Help Desk Responses | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | (Marketplace) The Division of Issuer Management and Operations has talked with its contractor, LMI, about using an AI tool for the Help Desk contract. | Generate responses to common questions from issuers and external organizations based on previously cleared material. | Reduce the amount of time contractor staff need to generate answers for SME review and to approve responses to issuers and other external entities that submit questions to the help desk. | Reduce the amount of time contractor staff need to generate answers for SME review and to approve responses to issuers and other external entities that submit questions to the help desk. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | Resource Library Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Helping users find information related to program participation. | Improving access to program information, reducing service desk burden, and improving the user experience for searches. | The AI pulls responses from preapproved documents that were fed into the system. It is not a learning model at this time. | The AI pulls responses from preapproved documents that were fed into the system. It is not a learning model at this time. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Chatbot within Hub | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Other) Currently, the following help support activities are handled by a human: - Personalized FAQs - Clarifying questions about schema, onboarding, etc. - Handling "Where is 'My File' or 'My Request'?" inquiries - Providing a "Talk to Agent" feature - Scheduling a testing window - Providing data summarization and reporting for internal stakeholders - Reporting operational health of the system - Including training materials and Q/A about the system | Build a chatbot that will address some help support activities through: 1. Access to an internal knowledge base, including retrieval-augmented generation (RAG) 2. Personalized FAQs & contextual generation 3. Interaction with Live Agent 4. Chat & Talk Feature 5. Ability to query a custom data source for File or Case status | Helps the support team in day-to-day communication with external partners | Helps the support team in day-to-day communication with external partners | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OPOLE | Medicare Part C/D Marketing Material Review | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Expand the volume of materials reviewed while providing consistent insights into trends and issues with materials received. | Reduce cycle time and increase the volume reviewed | Guided recommendations | Guided recommendations | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | QPP Admin Bot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI is intended to help manage code complexity. | In general, we expect to see a decrease in lines of code, more efficient code, increased developer output, and reduced story points for work, which in turn produces cost savings. | Recommendation. | Recommendation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OAGM | CMS Labor Analysis Wizard (CLAW) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | CLAW provides semi-automated analysis and insights into contractor-proposed labor for evaluation and negotiation purposes, enabling OAGM to make more informed procurement decisions that optimize contract terms and enhance the efficiency of the federal acquisition process. | CLAW enables OAGM to make more informed procurement decisions that optimize contract terms and enhance the efficiency of the federal acquisition process. | CLAW outputs normalized labor category classifications, historical price trend analyses, and comparative insights on contractor-proposed labor rates to support OAGM's contract evaluation and negotiation processes. | 25/05/2026 | c) Developed with both contracting and in-house resources | Skyward Solutions | Yes | CLAW outputs normalized labor category classifications, historical price trend analyses, and comparative insights on contractor-proposed labor rates to support OAGM's contract evaluation and negotiation processes. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | CCSQ Now Assist for CSM | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | CCSQ Now Assist for CSM (Customer Service Management) is a product inside ServiceNow (SaaS). It integrates generative AI with CSM and uses Now LLM (ServiceNow's native Large Language Model) to generate content based on machine learning (ML). Now Assist for CSM helps agents improve productivity and efficiency and deliver better services. It improves agents' responsiveness and productivity: * Quickly get familiar with a case/chat via case/chat summarization * Quickly resolve the case using auto-generated resolution notes. | CCSQ Now Assist for CSM (Customer Service Management) is a product inside ServiceNow (SaaS). It integrates generative AI with CSM and uses Now LLM (ServiceNow's native Large Language Model) to generate content based on machine learning (ML). Now Assist for CSM helps agents improve productivity and efficiency and deliver better services. It improves agents' responsiveness and productivity: * Quickly get familiar with a case/chat via case/chat summarization * Quickly resolve the case using auto-generated resolution notes. | Agents have quick access to - Case summarization - Chat and agent hand-off summarization - Resolution notes generation | 24/12/2026 | a) Purchased from a vendor | Yes | Agents have quick access to - Case summarization - Chat and agent hand-off summarization - Resolution notes generation | Historical CCSQ ServiceNow case data | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | AI Workspace | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Hosted development environment with access to AI tooling, including LLMs, for use in early-stage exploration. | AI Workspace provides a hosted development environment with access to AI tooling, including LLMs, enabling CMS teams to conduct early-stage AI exploration and experimentation that can lead to innovative solutions. | AI Workspace provides a hosted development environment that enables end users to create code, with the organizational impact being rapid exploration of AI ideas to validate technical feasibility, viability, and desirability of potential solutions. | 25/04/2026 | c) Developed with both contracting and in-house resources | Skyward Solutions | Yes | AI Workspace provides a hosted development environment that enables end users to create code, with the organizational impact being rapid exploration of AI ideas to validate technical feasibility, viability, and desirability of potential solutions. | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | Citation Analysis and Survey Assistant (CASA - Nursing Home Survey CMS 2567) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | CASA enhances the efficiency and effectiveness of monitoring and reviewing nursing home surveys across the US. It enhances how the Quality, Safety, and Oversight Group (QSOG) and Survey Operations Group (SOG) assess how State Survey Agencies (SSAs) cite nursing home deficiencies, reported on CMS Form 2567, by employing advanced natural language processing and machine learning technologies. | Speeds up the manual and time-consuming process of survey review and citing nursing home deficiencies, reported on CMS Form 2567, by employing advanced natural language processing and machine learning technologies. | An application that provides Nursing Home Oversight groups an interface to view, track, and utilize all the features supported by the aforementioned ML/AI-powered processes. | 24/11/2026 | b) Developed in-house | Yes | An application that provides Nursing Home Oversight groups an interface to view, track, and utilize all the features supported by the aforementioned ML/AI-powered processes. | It uses data from the Nursing Home Care Compare website. A model/process uses an LLM and few-shot learning to identify Extent and Sample from deficiency text. Accuracy metrics are generated by comparing development outcomes with labeled data collected from labeling jobs. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Independent Dispute Resolution (IDR) Eligibility Rules Engine | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The current Independent Dispute Resolution (IDR) Technical Assistance (TA) process is very manual and time intensive which limits throughput. The rules engine should significantly expedite processing and expand capacity. Automating more of the process may also increase consistency across recommendations and result in a more predictable timeframe for the workflow by better positioning disputes for analyst review. | The AI tool will help automate the eligibility review process, reducing time-intensive manual steps and increasing consistency of results. | Use of artificial intelligence (AI) models to identify the presence or absence of necessary data points within documentation. AI tool searches documentation to identify and store necessary data points such as document title, file type, payment date, service code, claim number, and date of service. | 25/03/2026 | a) Purchased from a vendor | Yes | Use of artificial intelligence (AI) models to identify the presence or absence of necessary data points within documentation. AI tool searches documentation to identify and store necessary data points such as document title, file type, payment date, service code, claim number, and date of service. | Federal Independent Dispute Resolution (IDR) Dispute Data | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Review Regulatory Comments | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Public comment analysis in the Letter to Issuers (LTI) and Notice of Benefits and Payment Parameters (NBPP) to streamline review of comments. | Identify trends in responses from commenters in policy and operational issues proposed by CMS. | Reduce the amount of time and resources CCIIO and contracting staff would need to review the LTI and NBPP by an estimated 25%. | Reduce the amount of time and resources CCIIO and contracting staff would need to review the LTI and NBPP by an estimated 25%. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Improved Data Quality Checks | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Improve consumer experience by providing an asynchronous (not at the same time) solution to detect and provide near real time feedback via outreach to the consumer, shortening the overall return cycle time without requiring UI changes. | Develop a POC classifier model to identify incorrect document upload types / low-quality images through use of optical character recognition (OCR). | Improvement of user experience | Improvement of user experience | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CMCS | Performance Metrics Database and Analytics (PMDA) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCSQ | CCSQ Now Assist for Creator | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Improve efficiencies regarding development within the CCSQ ServiceNow program. | Improve time to deliver customer value through improved efficiencies regarding development within the CCSQ ServiceNow program. | Text to Code. Text to Flow. Flow Assist | 24/10/2026 | a) Purchased from a vendor | Yes | Text to Code. Text to Flow. Flow Assist | Peer reviews | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CM | Docketscope Public Comment Processing | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Docketscope is the platform used to triage public comments submitted on the OPPS/ASC and ESRD PPS proposed rules. | Allows division staff to compile, organize, and process thousands of comments that inform final rulemaking. This AI functionality may contribute to greater efficiencies in reviewing public comments by automating the process of identifying identical comments and allowing for quicker processing of comments with similar themes. | The AI functionalities used in Docketscope include a clustering feature that automatically groups similar public comments together using text analysis and heuristic techniques, an issue mapping functionality that depends on machine learning to render HTML document versions, and a bulk processing comment feature which uses a rules-based system and logic programming to identify public comments relevant to a specific topic. | 23/04/2026 | a) Purchased from a vendor | No | The AI functionalities used in Docketscope include a clustering feature that automatically groups similar public comments together using text analysis and heuristic techniques, an issue mapping functionality that depends on machine learning to render HTML document versions, and a bulk processing comment feature which uses a rules-based system and logic programming to identify public comments relevant to a specific topic. | Public comments from regulations.gov | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | iVeri-Fi (Test) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | In October 2024, Serco will begin utilizing iVeri-Fi (a decision service platform) to perform automated processing of remote identity proofing (RIDP) verification tasks. These tools are already in our Eligibility Workers Support System (EWSS) stack and have an existing Authority to Operate (ATO). We are not introducing any new technologies - we are just changing how and where the work is done through automation (previously, this work was not integrated with the Task Inconsistency Processing System (TIPS)). | A decision service (Sapiens) will make the adjudication decision using Remote Identity Proofing (RIDP) business rules. This service integrates with Event-Based Processing (EBP) microservices, Sapiens Decision, and Rosette Name Indexer (RNI) for matching identity data. Sapiens Decision uses AI to ensure consumer and RIDP data match and will incorporate more machine learning in the future. | Significant reductions in operational costs, increased efficiency in task processing, improved quality and consistency in decision-making, and enhanced user experience for eligibility support workers by reducing their manual workload. The system also aims to facilitate easier updates and modifications, supporting ongoing improvements and expansions of automation. | Significant reductions in operational costs, increased efficiency in task processing, improved quality and consistency in decision-making, and enhanced user experience for eligibility support workers by reducing their manual workload. The system also aims to facilitate easier updates and modifications, supporting ongoing improvements and expansions of automation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OHI | AI-Powered Meeting Notes for MAG Hearings and AIRC Sessions | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to address process redundancy by streamlining repetitive tasks, reducing manual effort, and minimizing duplication across workflows. This allows teams to operate more efficiently and focus on higher-value activities. | The goal is to reduce manual note-taking, improve accuracy, and ensure timely documentation for case management and compliance. | The outputs will include automated workflows, consolidated reports, and task completion logs that eliminate repetitive manual steps and streamline operational processes. | The outputs will include automated workflows, consolidated reports, and task completion logs that eliminate repetitive manual steps and streamline operational processes. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OHI | AI-Generated Pre-Briefs for MAG Hearings and AIRC Sessions | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Utilize an AI system to compile structured pre-briefs for MAG Hearings and AIRC sessions. | The goal is to provide Hearing Officers with a concise, data rich overview of the appeal case, enabling more focused and efficient sessions. | A standardized, data-rich case summary for each appeal, including key facts, timelines, relevant documentation, and decision history, presented in a concise format optimized for Hearing Officer review. | A standardized, data-rich case summary for each appeal, including key facts, timelines, relevant documentation, and decision history, presented in a concise format optimized for Hearing Officer review. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OHI | Knowledge Management Solution for Appeal Case Workers | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Utilize an AI solution to create a centralized, AI-enhanced Knowledge Management system to support case workers by providing quick access to relevant SOPs, policy guidance, workflows, forms, and call scripts | The goal is to improve consistency, reduce research time, and enhance the quality of appeal processing. | Standardized appeal processing templates, centralized reference materials, and automated case data summaries that provide consistent information, reduce the need for manual research, and support higher-quality decision-making. | Standardized appeal processing templates, centralized reference materials, and automated case data summaries that provide consistent information, reduce the need for manual research, and support higher-quality decision-making. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | CMS Chat | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Workforce productivity and operational efficiency | The expected benefits include significant productivity gains and enhanced operational efficiency through streamlined work. | CMS Chat generates text-based responses including drafted content for emails and reports, document summaries and analysis, synthesized findings, brainstormed ideas, and answers to queries - all delivered through a conversational interface. | 24/12/2026 | c) Developed with both contracting and in-house resources | Skyward IT Solutions | Yes | CMS Chat generates text-based responses including drafted content for emails and reports, document summaries and analysis, synthesized findings, brainstormed ideas, and answers to queries - all delivered through a conversational interface. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Artificial Intelligence Use Case Pilot in Medical Review on Medicare Fee-For-Service (FFS) Improper Payment Measurement (Comprehensive Error Rate Testing Program) Data | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | OFM is hoping to leverage cutting-edge AI technology to potentially introduce new efficiencies and accuracy in the Comprehensive Error Rate Testing (CERT) medical review process by moving away from manual medical review which is costly, sometimes inaccurate, and inefficient. | Expected benefits include cost savings, new efficiencies, and improved accuracy of clinical medical review decision-making for measuring and reporting Medicare FFS improper payments. | Recommendation | Recommendation | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Content Analysis POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Content analysis POC for the automated analysis of qualitative data including comments, complaints, stakeholder interviews, surveys, etc. | The content analysis POC will enhance productivity and operational efficiency by automating the analysis of qualitative data from comments, complaints, stakeholder interviews, and surveys, enabling CMS staff to process large volumes of feedback more quickly and systematically to improve healthcare programs and services. | The AI system outputs coded qualitative data and thematic analysis results, identifying patterns, themes, and insights from comments, complaints, stakeholder interviews, and surveys to support evidence-based decision-making. | 25/07/2026 | b) Developed in-house | No | The AI system outputs coded qualitative data and thematic analysis results, identifying patterns, themes, and insights from comments, complaints, stakeholder interviews, and surveys to support evidence-based decision-making. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Enterprise Architecture LLM for CMS Regulatory Content | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Enhance the Enterprise Architecture (EA) Knowledgebase by creating a centralized, searchable repository of CMS regulatory knowledge, and make complex regulatory information available within the EA environment. | Provides insight into how a law, regulation, policy or guidance will impact CMS programs, business functions, stakeholders, and systems. | Information for, and relationships between CMS systems, business functions, and regulatory information | Information for, and relationships between CMS systems, business functions, and regulatory information | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CMCS | T-MSIS Prima | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Supports T-MSIS program objectives, facilitates data-driven decisions, and accelerates IT delivery. | T-MSIS Business Outcomes Acceleration: Team Sprint Velocity Acceleration, Faster IT feature delivery, High-level Task Automation, and Better Customer Outcomes (initially, faster responses to customer assistance requests) | Code, Documentation, Communications, Analysis/Research | Code, Documentation, Communications, Analysis/Research | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CM | Health Plan Management System - Complaint Tracking Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Support the analysis of MA and Part D beneficiary complaints data. | Enhance CMS' understanding of beneficiary issues with MA and Part D plans. | Complex analysis of a large set of complaint data to identify trends in order to facilitate casework activity. | Complex analysis of a large set of complaint data to identify trends in order to facilitate casework activity. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Knowledge Management Solution | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI-driven system addresses a critical challenge in government operations - preserving institutional knowledge. | Key benefits include knowledge preservation by capturing and maintaining complex expertise, and resource development that helps onboard new staff and upskill existing team members. Additionally, operational efficiency is enhanced by enabling teams to accomplish more with existing resources. Continuity is ensured by reducing knowledge loss when experienced staff transition. | Contextual answers and recommendations and reference documentation | Contextual answers and recommendations and reference documentation | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | ASSIST Tool | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This tool focuses on strategic alignment and mission effectiveness across the team. | This tool focuses on strategic alignment and mission effectiveness by ensuring that work activities directly support CMS's strategic objectives through Strategic Framework Integration. It maintains organizational direction and priorities with a Mission Focus. Furthermore, it drives performance improvements and best practices through Operational Excellence and demonstrates how daily work contributes to broader CMS goals with accountability. | The output provides analysis and recommendations based off of stated operational achievements. | The output provides analysis and recommendations based off of stated operational achievements. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Executive Order Gap Analysis Tool | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | A specialized compliance and analysis solution that analyzes Executive Orders and program impact. | The solution assists the program in compliance and analysis. It systematically reviews new Executive Orders and compares them against existing business and technical requirements. It identifies gaps where current processes might need adjustments and supports compliance to ensure CMS operations align with federal mandates. | Provides recommendations and gap analysis. | Provides recommendations and gap analysis. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OFM | Contract Invoice Analyzer Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Uses AI and automation to provide a more comprehensive review of contract invoices, mitigating waste, fraud, and abuse. | Contract Invoice Analyzer is an AI-powered financial oversight tool designed to provide a comprehensive analysis of contract invoices. It automates part of the review process, identifying potential patterns of waste, fraud, and abuse. Additionally, it helps optimize contract spending and oversight. | Provides recommendations and analysis | Provides recommendations and analysis | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Code Assistance WETG GitHub CoPilot Proof-of-Concept | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Coding assistance | Cost savings, improved efficiency and quality | Suggestions | 25/08/2026 | a) Purchased from a vendor | No | Suggestions | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Search in Google Vertex (discovery) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Improved search | Improved customer experience | Suggestions | Suggestions | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI in Slack | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Improved Slack | Improved operations | Suggestion | 25/07/2026 | a) Purchased from a vendor | Yes | Suggestion | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI in Figma Make | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Rapid prototyping | Improved speed and efficiency | Suggestion | 25/07/2026 | a) Purchased from a vendor | No | Suggestion | No | k) None of the above | No | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Language Translation Support in Smartling | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Improved translation | Cost savings, improved efficiency and quality | Suggestion | 24/09/2026 | a) Purchased from a vendor | No | Suggestion | No | Unpublished | k) None of the above | No | Unpublished | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Web Content Search Engine Optimization (SEO) in Drupal | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Search engine optimization | Cost savings, improved efficiency and quality | Suggestion | Suggestion | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | Medicare AI Customer Insights | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Improved customer insights | Cost savings, improved efficiency and quality | Suggestion | 25/08/2026 | c) Developed with both contracting and in-house resources | commonFont and AWS | Yes | Suggestion | N/a | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | Medicare AI Drug Search | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Government Benefits Processing | Pilot | c) Not high-impact | Not high-impact | Generative AI | Improved customer experience | Improved access to government benefits | Suggestion | 25/08/2026 | c) Developed with both contracting and in-house resources | Oddball and AWS | Yes | Suggestion | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | Marketplace Qualified Health Plan Benefit AI Assistant (discovery) | a) Pre-deployment The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Improved customer experience | Improved benefits selection | Suggestion | Suggestion | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OC | AI Automation within the WETG Web Help Service Desk (discovery) | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Improved help desk service | Cost savings, improved efficiency and quality | Suggestion | Suggestion | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/EPRO | EPRO HTA Reporting | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Text Network Analysis POC | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | RFI Comment Analysis POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Addresses the challenge of efficiently analyzing and extracting insights from large volumes of comments in response to an RFI. | The AI solves the challenge of efficiently conducting qualitative data analysis on RFI comments. | The system produces coded data and conducts thematic analysis. | 25/07/2026 | b) Developed in-house | Yes | The system produces coded data and conducts thematic analysis. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | AI Agent Orchestrator POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI Agent Orchestrator solves the problem that current chatbots are limited to question-and-answer interactions and cannot orchestrate complex, multi-step analytical workflows that require calling external data science notebooks and tools in the correct sequence to solve sophisticated problems. | The AI Agent Orchestrator will transform CMS's analytical capabilities by enabling any staff member to execute sophisticated, multi-step data science workflows through natural language interactions, eliminating the current barrier of requiring specialized technical expertise to access complex analytical tools and notebooks. | The output is a natural language interface that enables users to easily interact with complex data science tooling and analytical workflows without requiring specialized knowledge of the underlying systems | 25/07/2026 | c) Developed with both contracting and in-house resources | Noblis | Yes | The output is a natural language interface that enables users to easily interact with complex data science tooling and analytical workflows without requiring specialized knowledge of the underlying systems | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | MCP Server Registry and Integration Platform | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The MCP Server Registry solves the problem that CMS teams currently lack a centralized, standardized platform to develop, share, and integrate specialized AI tools and capabilities across the organization | The MCP registry and integration platform will support agentic AI tool development across CMS by enabling any team to build and share specialized capabilities through standardized MCP servers | The AI system outputs a centralized registry platform that enables CMS teams to discover, register, and integrate MCP servers through standardized protocols. | The AI system outputs a centralized registry platform that enables CMS teams to discover, register, and integrate MCP servers through standardized protocols. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Agentic Web Search | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The AI-powered web search agent solves the problem that CMS's custom chatbot, CMS Chat, currently lacks internet access and is limited to training data with a cutoff date, preventing staff from accessing real-time information | The web search agent will significantly enhance the chatbot's capabilities by providing access to real-time information | The agent will pull real-time web search results into CMS Chat context | The agent will pull real-time web search results into CMS Chat context | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Deep Research Multi-Agent System | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The Deep Research multi-agent system solves the problem that CMS employees currently cannot conduct research directly within CMS Chat. | The Deep Research system will enable CMS employees to conduct multi-faceted research directly within CMS Chat by automatically decomposing complex queries into targeted subqueries, searching across web and internal data sources simultaneously | The AI system outputs research reports delivered directly within CMS Chat's conversational interface | The AI system outputs research reports delivered directly within CMS Chat's conversational interface | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | 508 Compliance Review MCP Server | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The 508 Compliance Review MCP server solves the problem that CMS staff currently must manually review documents for Section 508 accessibility compliance, which requires specialized knowledge of accessibility standards, is time-consuming, and may result in inconsistent evaluations | The 508 Compliance Review system will enable CMS staff to automatically evaluate documents for accessibility compliance through CMS Chat and other systems | The AI system outputs detailed 508 compliance assessments delivered through CMS Chat and possibly other systems. | The AI system outputs detailed 508 compliance assessments delivered through CMS Chat and possibly other systems. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | PubMed Literature Review MCP Server | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Agentic AI | The PubMed Literature Review MCP server solves the problem that CMS teams need to carry out various types of literature reviews, commonly using PubMed data, but lack an internal tool for doing this at greater scale than manual effort allows | Enable CMS teams to conduct literature reviews at scale through an internal tool that automates PubMed data analysis and synthesis | Literature reviews and research syntheses that combine PubMed data with other contextual information | 25/08/2026 | c) Developed with both contracting and in-house resources | Skyward IT Solutions | Yes | Literature reviews and research syntheses that combine PubMed data with other contextual information | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | SAM.gov RFI Comment Analysis POC | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The SAM.gov RFI Comment Analysis POC solves the problem that CMS procurement staff currently must manually review and score vendor responses to Requests for Information (RFI) posted on SAM.gov, which is time-intensive, especially when there are many questions and many vendors responding. | The RFI Comment Analysis POC will enable CMS teams to systematically evaluate vendor responses using AI-powered analysis, improving the efficiency of assessments. | The AI system outputs vendor scoring and ranking reports | 25/08/2026 | b) Developed in-house | Yes | The AI system outputs vendor scoring and ranking reports | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Plan Justification Tool | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Identify trends/patterns within plan certification justification templates containing free-form text data, through the use of natural language processing (NLP) and classification techniques. | Build a supervised ML model using historical justification data and associated plan certification outcomes that can be recommended to CMS to build towards a more efficient review process. | Improve efficiency of the plan certification review process by automating the initial justification review, rendering a verdict, and then looping in a human for the final decision. | Improve efficiency of the plan certification review process by automating the initial justification review, rendering a verdict, and then looping in a human for the final decision. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | TIC URL Automation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Issuers applying for QHP (Qualified Health Plan) certification, including issuers offering off-Exchange SADPs (stand-alone dental plans), must submit a Transparency in Coverage URL in MPMS that leads to a page on the issuer's website where required information is posted. The current review process requires reviewers to manually read and verify the language presented on each TIC URL for compliance, which is both time-consuming and labor-intensive. | AI can expedite the process by enabling AI algorithms to rapidly scan URL content to identify if the required language is present and compliant, reducing the review time compared to manual reading. Additionally, AI can manage and review large volumes of URL content, further accelerating the review process. | Reduce manual review, apply uniform review criteria across all URLs, ensuring that the review is consistent and free from the variability that can occur across multiple reviewers | Reduce manual review, apply uniform review criteria across all URLs, ensuring that the review is consistent and free from the variability that can occur across multiple reviewers | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Rx Data Integrity Review | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The current process for conducting the Rx Data Integrity review is time intensive and requires reviewers to manually review online formularies in a condensed timeline. The purpose of this proposal is to develop a new process, which will automate as much of the review as possible, using existing language models as well as large language models (LLM) to introduce efficiencies and increase data accuracy. | A Python-based automation pipeline will download PDFs and extract required Rx information in appropriate format for review | automate as much of the review as possible, and increase data accuracy | automate as much of the review as possible, and increase data accuracy | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | RA and RADV Predictive Modeling and Methodology Evaluation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) Risk adjustment data validation (RADV) is the audit of HHS-operated Risk Adjustment (RA). KPMG, the main RADV contractor, seeks to conduct predictive modeling and simulations of proposed policy changes. KPMG also validates the RA models (ML-based) and evaluates the RADV methodology. | Predictive modeling and simulations of policy changes are used to determine likelihood of various outcomes across markets, by individual issuer, etc. Model validation and methodology evaluation determine effectiveness, fairness, and impacts of the programs. | Better decision-making ability for proposed policy changes. Ensure integrity and validity of RA models, which are complex ML-based models. | Better decision-making ability for proposed policy changes. Ensure integrity and validity of RA models, which are complex ML-based models. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | RADV Medical Record Review | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) RADV entails the review of enrollees' medical records to determine if diagnoses submitted to CCIIO for purposes of calculating RA transfers actually exist. This medical record review is time- and resource-intensive for KPMG (and therefore CCIIO). | Improve efficiency of medical record review by flagging diagnoses for KPMG coders to review. | Reduce time (and therefore costs) spent on RADV medical record review | Reduce time (and therefore costs) spent on RADV medical record review | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Issuer Expansion/Market Entry Prediction | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | CCIIO FO conducts targeted outreach to issuers likely to enter the markets or expand to other states | Predictive models using AI/ML and over 300 signals derived from data determine the likelihood of expansion or new entry into the markets in the future by issuer or parent-company | Better decision-making ability and strategy for CCIIO to conduct outreach to market entrants/expanders | Better decision-making ability and strategy for CCIIO to conduct outreach to market entrants/expanders | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | SBC Content Review | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | (Marketplace) The SBC Content Review identifies cost-sharing discrepancies between the Plans and Benefits Template and SBC Form. If there are discrepancies in the numerical values input in the two data sources, reviewers must manually review the "Limitations, Exceptions and Other Information" in the SBC Form to assess if a true discrepancy exists. The purpose of the proposal is to integrate large language models (LLM) to programmatically review qualitative data in the SBC Form. | Improve efficiency of SBC Form review. | Reduce time in identifying true discrepancies. | Reduce time in identifying true discrepancies. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | MFT UI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Hub/MFT team manually supports operations related to onboarding, Q/A, file tracking, and technical inquiries (monthly average: ~800 email inquiries and ~40 Jira/SNOW tickets). Communication is manual via email, Zoom meetings, and Slack. This current process is time-consuming and degrades stakeholder experience. Typical operational hours are Monday through Friday, 8 a.m. to 6 p.m. ET (this does not serve Mountain Time, the West Coast, Hawaii, or Alaska), followed by XOC escalations, meaning non-prod inquiries are responded to the following business day. We have limited Quality of Service (QoS) metrics to accurately assess consumer satisfaction at present. | Implement a chatbot to provide real-time file status updates for external users and tools for internal teams to generate reports and visualize key metrics. | Real-Time File Status (including Detailed Processing Information); FAQs; ability to chat with a Live Agent; Survey Form; Historical File Tracking; Limited CMS Stakeholder Testing; development of Quality of Service (QoS) metrics; Self-Learning Performance and Alerting; rollout to Alpha Partners and Issuers; User/Alpha Partner Feedback | 25/08/2026 | b) Developed in-house | Yes | Real-Time File Status (including Detailed Processing Information); FAQs; ability to chat with a Live Agent; Survey Form; Historical File Tracking; Limited CMS Stakeholder Testing; development of Quality of Service (QoS) metrics; Self-Learning Performance and Alerting; rollout to Alpha Partners and Issuers; User/Alpha Partner Feedback | Data are statuses of file transfers held in the EFT PostgreSQL database tables | No | PIA not publicly available | k) None of the above | Yes | PIA not publicly available | |||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | MLMS Upgrade MILA from CARLA to Druid | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A new chatbot proposed by the CMS MLMS team will be integrated into the existing MLMS application. It aims to improve user experience, is designed to maintain or boost deflection rates, and will be hosted within the CMS environment for smooth integration. | Empower the MLMS operations team with full control over chatbot management; maintain or improve the current deflection rate from MILA 1.0; reduce help desk costs through a more efficient support solution; tailor chatbot content to specific MLMS support requirements; deliver personalized user interactions to enhance experience; ensure smooth escalation to Tier 1 support when needed | Improve user experience and maintain or boost deflection rates | Improve user experience and maintain or boost deflection rates | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/CCIIO | Interoperability URL Review Automation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enable the team to save time and labor hours, yielding cost savings | Improve timeliness, minimize review inconsistencies, and improve quality | Verifying the submitted Interoperability URLs are active; checking that hyperlinks within the Interoperability webpages are active; reviewing URL content for required standards and language; entering review results into the Interoperability Review Round Workbook; review of URLs submitted for Question 3 that provide conformant technical documentation for the Patient Access API; Interoperability URL entry into the Interoperability Review Round Workbook; technical review of a selected subset of the applications; transferring review results from the Interoperability Review Round Workbook to MPMS; all Interoperability Justification Forms will be reviewed manually | Verifying the submitted Interoperability URLs are active; checking that hyperlinks within the Interoperability webpages are active; reviewing URL content for required standards and language; entering review results into the Interoperability Review Round Workbook; review of URLs submitted for Question 3 that provide conformant technical documentation for the Patient Access API; Interoperability URL entry into the Interoperability Review Round Workbook; technical review of a selected subset of the applications; transferring review results from the Interoperability Review Round Workbook to MPMS; all Interoperability Justification Forms will be reviewed manually | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | C3PO: CMS Comprehensive Cybersecurity and Privacy Optimization | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | Reduce the burden of reviewing lengthy OMB, NIST, and HHS policy documents and enable quicker review of SORNs and PIAs | Reduced response time to new or emerging policies and technologies, and the ability to adequately review privacy agreements in the event of an incident | Recommendations on policy updates, identification of privacy agreements and the associated systems. | 24/10/2026 | c) Developed with both contracting and in-house resources | Connsci and OpenAI | No | Recommendations on policy updates, identification of privacy agreements and the associated systems. | Publicly available OMB memos, NIST Guidance, CMS policies, CMS SORNs and PIAs | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Integrated Data Repository (IDR) Customer Analytic Environment (CAE) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Provides an AI/ML environment connected to the IDR that enables CAE customers to focus on their AI use cases and business outcomes, without the burden of setting up or maintaining their own IT infrastructure or AI/ML labs or environments. | The CAE provides CMS Centers and Offices with secure, fully managed AI/ML workspaces directly integrated with the IDR, eliminating the need to build or maintain separate infrastructure. This accelerates AI innovation, shortens time-to-insight, and enables scalable adoption of advanced analytics across the agency. | Within the CAE, AI systems produce outputs such as predictions (e.g., forecasting trends, detecting anomalies), recommendations (e.g., suggested actions or risk mitigation strategies), classifications (e.g., grouping records or identifying patterns), and other decision-support insights. These outputs are generated from IDR data within a secure, fully managed environment and are designed to inform and augment human decision-making across CMS programs, not to operate autonomously. | 25/07/2026 | c) Developed with both contracting and in-house resources | GDIT | Yes | Within the CAE, AI systems produce outputs such as predictions (e.g., forecasting trends, detecting anomalies), recommendations (e.g., suggested actions or risk mitigation strategies), classifications (e.g., grouping records or identifying patterns), and other decision-support insights. These outputs are generated from IDR data within a secure, fully managed environment and are designed to inform and augment human decision-making across CMS programs, not to operate autonomously. | CAE is an environment where customers will build their ML models and AI use cases, by leveraging the IDR | Yes | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | IDR Support Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The IDR Support Bot is an AI-powered chatbot that makes it easier for CMS users to get the IDR information they need, when they need it. It answers questions about IDR terminology, tools and services, onboarding steps, available data, training content, key contacts, and more, without the hassle of submitting a support request or digging through pages of documentation. | Any CMS user with an active ID can use the IDR Support Bot to gain a foundational understanding of the IDR, its data offerings, and how to navigate its resources. The IDR Support Bot leverages retrieval-augmented generation (RAG) technology to provide accurate and contextually relevant responses to user queries. By combining a structured knowledge base with real-time document retrieval, IDR Support Bot ensures that users receive up-to-date and comprehensive information about the IDR. | The IDR Support Bot leverages retrieval-augmented generation (RAG) technology to provide accurate and contextually relevant responses to user queries. | 25/07/2026 | c) Developed with both contracting and in-house resources | GDIT | Yes | The IDR Support Bot leverages retrieval-augmented generation (RAG) technology to provide accurate and contextually relevant responses to user queries. | General information about the IDR, such as terminology, tools and services, onboarding steps, available data, training content, key contacts, and more. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Eligibility & Enrollment Medicare Online (ELMO) Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Assist Medicare caseworkers in navigating a fairly complex application | More efficient caseworkers | Answers questions about the ELMO tool, how to navigate it, and where to find information | 25/07/2026 | c) Developed with both contracting and in-house resources | Peraton, Inc. | Yes | Answers questions about the ELMO tool, how to navigate it, and where to find information | The model was trained using CMS general information and ELMO tool documentation | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Production Operations Anomaly Analysis | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify potential production issues proactively and refer for human analysis | Early identification of issues and improved response time, ensure accuracy of bills | Flags anomalies that stray far from its predictions | 25/06/2026 | a) Purchased from a vendor | Yes | Flags anomalies that stray far from its predictions | Production Operational data, high level billing data, state and agency summarized billing data, beneficiary level billing data in the future (no PII/PHI) | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/CMS/OIT | Case Management Tool Case Creation & Automation | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Automate and optimize manual case creation for incoming documents/requests and simplify manual tasks | Improved case completion volume, higher caseworker productivity | Creates a case in the system based on a scanned document from the public, automates simple functions, etc. | 25/07/2026 | a) Purchased from a vendor | Yes | Creates a case in the system based on a scanned document from the public, automates simple functions, etc. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/DAB/OS | AI to Improve Public Access to the Administrative Appeals Process | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | More effectively leverage online resources to better assist the public and reduce the number of misfiled appeals submitted to the Departmental Appeals Board (DAB) via electronic filing. | DAB adjudicatory divisions rely heavily on electronically filed (E-filed) appeals. Most appellants access E-filing through the DAB's website. A chatbot on the website will reduce filing errors and improve customer experience by directing appellants to the correct DAB adjudicatory division responsible for deciding their appeal. | Reliable data on appellant filing activity and metadata the agency will use to analyze workloads and allocate resources. | Reliable data on appellant filing activity and metadata the agency will use to analyze workloads and allocate resources. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/DAB/OS | AI Use Policy Tool | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Ensure public confidence in the integrity of DAB decisions by maximizing quality review standards and reducing errors. | The DAB is responsible for issuing fair, impartial, legally correct and defensible determinations which serve as the final decision of the DHHS Secretary. The AI quality review tool will use a large language model (LLM) to create algorithms that run behind our case tracking system to randomly select certain DAB decisions to identify potential quality review issues. The LLM will scan DAB decisions to ensure compliance with quality review standards (e.g., protect PHI, PII and FTI). Benefits include more effective quality review and faster identification of data trends that may require additional analysis. | Analysis of large volumes of data that can be used to address common errors and supports the development of targeted training and job aids. | Analysis of large volumes of data that can be used to address common errors and supports the development of targeted training and job aids. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/DAB/OS | Resources to Assist the Advisory Board In Identifying AI Tools for Use In An Adjudication Environment | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Agency adjudication activities require the analysis of large quantities of data to conduct docket analysis, identify efficiencies in case processing, and conduct 508 compliance required to make decisions available to the public. | Use AI to analyze data received from appellants and interested parties to identify trends, increase efficiency in case processing and improve adjudication outcomes. | Enhanced workload data that can be used to allocate resources to the DAB's various adjudicatory divisions. | Enhanced workload data that can be used to allocate resources to the DAB's various adjudicatory divisions. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: 356H Machine Learning (ML) Facility Supply Chain Role Classification Previously: 356H ML Facility Supply Chain Role Classification | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual classification of facility supply chain roles from FDA industry submissions (356h forms) is time-consuming and inefficient | Improve the efficiency of evaluating 356H forms to assess a facility's supply chain role, thereby decreasing the time required for the processing of submissions and enabling faster oversight of drug manufacturing facilities. | Extracts a facility's supply chain role from industry submissions and displays identified roles on a data-stewards screen for human verification and final determination. | 23/05/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Extracts a facility's supply chain role from industry submissions and displays identified roles on a data-stewards screen for human verification and final determination. | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: Risk-based FAR Review & Decision Support Previously: Field Alert Reports (FAR) Prioritization Model | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual and subjective assessment process for Field Alert Reports (FARs) can lead to inconsistency in prioritizing and assessing reports, potentially leading to ineffective resource allocation where the same level of formality is being applied for all issues, regardless of risk. | Assist Field Alert Report (FAR) reviewers by providing objective intelligence and insights that help prioritize the highest risk reports, potentially reducing response time to high-risk issues while maintaining human oversight of all decisions and leading to Agency resources being used more efficiently. | AI-based machine learning classification of FAR risk into low, medium, and high; provides insights on problem clusters, rare-events, and source variables. | 24/10/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) and contractor teams | Yes | AI-based machine learning classification of FAR risk into low, medium, and high; provides insights on problem clusters, rare-events, and source variables. | Internal Field Alert Reports data from LSMV | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Analytics-Driven Supplement Evaluation (ASE) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Exponential increase in post-approval chemistry, manufacturing, and controls (CMC) change submissions, with 80% being Changes Being Effected (CBE-30/0) notifications that may be suitable for systematic analytics-driven evaluation. | This AI use case supports the triage and staff assignment process for the review of post-market Change Being Effected (CBE) supplement submissions, improving review efficiency and consistency while ensuring appropriate regulatory oversight. | Using a Convolutional Neural Network (CNN) model, in combination with a rules-based approach, produces an output that helps staff triage CBE submission review | 24/03/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Using a Convolutional Neural Network (CNN) model, in combination with a rules-based approach, produces an output that helps staff triage CBE submission review | Data submitted in applicants' supplemental submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Renamed: FAR-based Facility Signal Detection Tool Previously: Post-market Surveillance Reports Signal Detection and Cluster Analysis. | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Need for proactive detection of quality signals in post-market surveillance reports using statistical process control and topic modeling to identify potential drug quality hazards and mitigate the associated risks. | Identifies proactive quality signals and their associated problem clusters in an objective manner for triage and human review, helping prioritize resources on the higher risk issues and more complex problems. | Identifies problem clusters for the flagged signals in an objective manner for triage/review. | 24/06/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Identifies problem clusters for the flagged signals in an objective manner for triage/review. | Internal Field Alert Reports data from LSMV (FAERS tool) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | MedWatch Dashboard | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Need to proactively identify emerging issues and clusters within MedWatch reports using advanced analytics. | Assist with consistent monitoring and identification of product risks from MedWatch reporting patterns and report content to support review staff in detecting potential safety problems that could affect patients. | Identify product risk signals from MedWatch reports by using time series analysis to flag products and topic modeling to summarize the comments. | 23/04/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Identify product risk signals from MedWatch reports by using time series analysis to flag products and topic modeling to summarize the comments. | MedWatch data from LSMV and IQVIA data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Quality Surveillance Dashboard (QSD) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Need for consistent, data-driven assessment of drug manufacturing facilities and proactive detection of potential quality signals that could indicate problems with drug safety or effectiveness. | Extracts unstructured text from documents and pools that with other available data to form a dashboard that enables consistent assessment of Center for Drug Evaluation and Research (CDER) regulated manufacturing facilities, supporting FDA's oversight of drug quality. | Identifies and extracts keywords/phrases from unstructured documents and presents sentences containing keywords/phrases in context. | 23/03/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Identifies and extracts keywords/phrases from unstructured documents and presents sentences containing keywords/phrases in context. | FDA EIR documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Annual Report CMC | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA receives numerous annual reports from drug manufacturers containing important manufacturing and quality information, but key details can be difficult to locate quickly within lengthy documents. | Objective of this use case was to assist in extracting Chemistry, Manufacturing, and Controls (CMC) changes reported within unstructured annual report industry submission documents to help build a complete repository for downstream analysis and more efficient regulatory review. | Support information extraction from unstructured documents | 24/05/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Application References (previously: Application-DMF Reference) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FDA receives extensive drug application submissions that contain valuable references to related applications, but these relationships are not always captured in existing regulatory databases. | Extracts references to Drug Master Files (DMFs) from marketing application submissions including Abbreviated New Drug Applications (ANDA), New Drug Applications (NDA), and Biologics License Applications (BLA). These submission documents may be structured (356h form) or unstructured (electronic Common Technical Document modules 1-4). This pipeline parses content from these documents, extracts DMF references (e.g., ANDA123456 references DMF123456), and exposes the data in a structured format for analysis. | Support information extraction from unstructured documents | 23/12/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Extracting DMF Facilities from unstructured documents (previously: DMF (Drug Master File) Facilities) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FDA receives Drug Master File (DMF) submissions containing valuable information about manufacturing facilities, but this data exists in various document formats that require manual review to compile comprehensive facility information. | Extracts facility references from Type II (Drug Substance) DMF manufacturing submissions (i.e., DMF123456 discloses that it uses Facility X for manufacturing and Facility Y for stability testing). These DMF submissions may include structured documents (3938 form) or unstructured documents (electronic Common Technical Document module 3), enabling more comprehensive oversight of the drug supply chain. | Support information extraction from unstructured documents | 23/06/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Information Visualization Platform (InfoViP) to Support Analysis of FAERS safety reports (previously: Information Visualization Platform (InfoViP) to Support Analysis of adverse event reports) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to support the analysis of FAERS Individual Case Safety Reports in post-market safety surveillance by automating duplicate detection, creating temporal visualizations, and classifying reports by information quality. | The Information Visualization Platform (InfoViP) supports post-market surveillance, using AI to assist with review and analysis of adverse event reports through advanced visualizations, including temporal data, and algorithms for detection of duplicate FAERS adverse event reports and classification of reports by level of information quality. | Performs Natural Language Processing (NLP) and applies Machine Learning (ML) algorithms to extract data from unstructured case narratives and combine it with structured data to support analysis and review of adverse event reports. | 25/08/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Performs Natural Language Processing (NLP) and applies Machine Learning (ML) algorithms to extract data from unstructured case narratives and combine it with structured data to support analysis and review of adverse event reports. | Adverse Event data in FAERS | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | LLM-Assisted VAERS Analyses | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Extraction of relevant information, tabulation of data, identification of patterns across adverse event reports, and generation of hypotheses for further investigation. | Build capacity for and assess the application of an LLM to VAERS (Vaccine Adverse Events Reporting System) to provide reviewers ad hoc VAERS queries and efficiently generate customized query outputs. | To provide reviewers ad hoc VAERS queries. | To provide reviewers ad hoc VAERS queries. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Module 3 Facilities Extraction (previously: Module 3 Faculties) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA receives drug application submissions (ANDA, NDA, BLA) that contain important manufacturing facility information in Module 3 documents, but this data requires manual review to identify and organize all facility details. | The objective of this use case was to assist in identifying and extracting all drug manufacturing facilities reported within unstructured Module 3 submissions from marketing applications to build a comprehensive inventory of drug facilities for better regulatory oversight. | Support information extraction from unstructured documents | 24/02/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Packaging Materials and Suppliers | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA needs to efficiently identify which Drug Master Files (DMFs) contain specific packaging materials and understand how these materials connect to drug applications, but this information is currently difficult to locate across numerous documents. | The objective of this use case is to extract data from unstructured sources to assist in building an inventory of drug packaging materials and their suppliers, supporting staff in conducting drug supply chain analysis for better regulatory oversight. | Support information extraction from unstructured documents | 24/09/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | Process Large Amount of Submitted Docket Comments | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Deduplication of public comments, and generation of draft sentiment analysis and grouping of comments | To enhance the automated processing of dockets, we have created an AI/ML tool in CBER/HIVE that automatically downloads dockets and processes them to accelerate the review of docket comments, significantly improving the efficiency and accuracy of our regulatory processes. | Deduplicated docket comments with draft sentiment analysis and groupings. | 23/06/2026 | b) Developed in-house | Yes | Deduplicated docket comments with draft sentiment analysis and groupings. | Public comments on various FDA dockets | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Real World Data/Evidence | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA receives numerous submissions containing real-world data and evidence, but identifying and cataloging these studies across various submission types is time-intensive and critical for regulatory reporting requirements under PDUFA (Prescription Drug User Fee Act). | Assist in identifying industry unstructured submissions containing Real World Data/Evidence (RWD/E) by analyzing parsed content for likely indicators to support congressional reporting and regulatory decision-making. | Supports extracting text from unstructured documents and tagging for documents containing Real World Evidence and Real World Data | 24/10/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Supports extracting text from unstructured documents and tagging for documents containing Real World Evidence and Real World Data | applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Regulatory Starting Material | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Generative AI | FDA needs comprehensive visibility into the upstream supply chain for drug manufacturing, particularly tracking regulatory starting materials and their suppliers across approved and pending drug applications to better understand potential supply chain vulnerabilities. | Assists in extracting Regulatory Starting Materials (RSMs) and their suppliers from unstructured module 3 industry submissions to help create an inventory that will illuminate the upstream supply chain and help FDA identify potential supply chain vulnerabilities. | Support information extraction from unstructured documents | 23/08/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Support information extraction from unstructured documents | applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Resource Capacity Planning | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | FDA needs to accurately predict the volume and complexity of incoming drug application submissions to ensure appropriate staffing and resources are available for timely reviews under the user fee program | Forecasting human drug review program submissions and corresponding FDA workload to support better resource planning and ensure timely review of drug applications that benefit public health. | Forecasts workload submissions across major user fee programs to help support fee setting for the human drug review programs | 20/08/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) | Yes | Forecasts workload submissions across major user fee programs to help support fee setting for the human drug review programs | FDA systems including DARRTS and Panorama | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDRH | Supply Chain Resilience Program, Office of Supply Chain Resilience (OSCR) - Foresight | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Estimating potential future demand of medical devices | Forecasting demand of medical devices and supplies. Forecast demand for critical devices during a variety of scenarios (e.g. natural disaster, PHE) | Aids in forecasting demand for critical devices under a variety of scenarios. | 23/04/2026 | b) Developed in-house | Yes | Aids in forecasting demand for critical devices under a variety of scenarios. | Premier transaction data of healthcare facility purchases | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | HIVE AI Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to assist in solving several problems related to the review process for INDs. Specifically, it aims to address inefficiency, delays in identifying deficiencies, and information overload. By providing recommendations for review disciplines, it helps reviewers quickly identify the review disciplines that are required. Identifying grossly deficient submissions early on reduces the workload, and highlighting key data helps reviewers focus on higher-level tasks that require their expertise. | Overall, the system is designed to improve the efficiency and effectiveness of the regulatory review process, allowing for quicker and well-informed decision making. | The system's outputs include: 1. Review discipline recommendations - automated suggestions for the most appropriate review disciplines for each incoming submission. 2. Highlighted key data - reports highlighting critical information to facilitate quicker understanding by RPMs and reviewers. 3. Summaries - reports summarizing large documents to potentially accelerate review. | 25/07/2026 | c) Developed with both contracting and in-house resources | SAIC | Yes | The system's outputs include: 1. Review discipline recommendations - automated suggestions for the most appropriate review disciplines for each incoming submission. 2. Highlighted key data - reports highlighting critical information to facilitate quicker understanding by RPMs and reviewers. 3. Summaries - reports summarizing large documents to potentially accelerate review. | Uses a dataset of previously submitted IND applications, which provides a comprehensive understanding of the types of data and information included in these submissions, along with feedback and annotations from experienced RPMs and reviewers on a subset of the historical submissions, which helps fine-tune the model's understanding of what constitutes a high-quality submission. | Data cannot be publicly disclosed as it is proprietary information from submissions | Yes | Not publicly available | k) None of the above | Yes | The code is not open source; it resides within the FDA GitLab repository and is not publicly available | Not publicly available | |||||||||
| Department Of Health And Human Services | HHS/FDA/CBER | AI and Vaccine Labeling | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The app is designed to streamline vaccine label review processes, offering several key features to simplify and improve efficiency. It includes MedDRA integration for term searches, comprehensive search across all vaccine documents, and vaccine-specific information retrieval. Additionally, the app provides lookup functionality for approval timelines and active ingredients, as well as tools for detecting duplicates and comparing section content using AI-enhanced technology. | Enhance the vaccine label review process, making it more efficient and effective. | The AI system's output is a list of similarities and differences between vaccine label sections, as well as highlighted changes or updates to vaccine labels. Additionally, it may identify duplicate or similar vaccines and provide recommendations for label revisions or updates. The system generates summarized information about vaccine ingredients, approval timelines, and other relevant details. These outputs would be presented in a user-friendly format, such as tables, charts, or highlighted text, to facilitate easy review and analysis by the user. | The AI system's output is a list of similarities and differences between vaccine label sections, as well as highlighted changes or updates to vaccine labels. Additionally, it may identify duplicate or similar vaccines and provide recommendations for label revisions or updates. The system generates summarized information about vaccine ingredients, approval timelines, and other relevant details. These outputs would be presented in a user-friendly format, such as tables, charts, or highlighted text, to facilitate easy review and analysis by the user. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CDER Publications | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Inefficient manual curation and categorization of publications by CDER authors. It aims to automate the process of organizing publications by focus areas for data call responses and identifying regulatory publications, replacing time-consuming manual review processes. | CDER Pubs is needed for the Science and Research Investments Tracking Archive (SARITA) for reporting of outcomes and to support the prioritization, management, and review of the quality and impact of CDER's science and research investments, helping ensure public accountability for research activities. | Accuracy of AI/ML data curation of publications feed, ability to categorize publications as regulatory or not regulatory, and ability to classify publications | 23/06/2026 | c) Developed with both contracting and in-house resources | NCTR | Yes | Accuracy of AI/ML data curation of publications feed, ability to categorize publications as regulatory or not regulatory, and ability to classify publications | CDER staff research citations from PubMed | https://saritaingest.fda.gov/CDER_Publications_System.html | No | k) None of the above | Yes | ||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CDER Regulatory Science Research (RSR) Projects AI for Process Control in Advanced Manufacturing | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This project explores the use of AI in advanced pharmaceutical manufacturing as part of an exploratory R&D effort focused on model predictive control strategies. The AI components are used solely in a research context to improve understanding of AI-enabled control systems and inform future regulatory readiness. The project does not involve operational use, decision-making, or direct impact on the public or regulated entities. Therefore, it does not meet the definition of a high-impact AI use case under OMB M-25-21. | Classical/Predictive Machine Learning | Need for better process control in continuous manufacturing and development of soft sensors for real-time release testing strategies. | The outcomes of this work can be used to gain a better understanding of AI in advanced pharmaceutical manufacturing control, identify the associated risks, and help review future submissions involving this technology, ultimately supporting more efficient and reliable drug manufacturing. | The AI model demonstrated remarkable performance in setpoint tracking and disturbance rejection for a digital continuous manufacturing line, underscoring the potential of AI-based control strategies in enhancing product quality and regulatory assessment. | 24/07/2026 | b) Developed in-house | No | The AI model demonstrated remarkable performance in setpoint tracking and disturbance rejection for a digital continuous manufacturing line, underscoring the potential of AI-based control strategies in enhancing product quality and regulatory assessment. | Data was generated using a digital twin of a manufacturing plant developed in-house | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Creating a development network | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of inconsistent data formats and inefficient access to unstructured clinical data across multiple healthcare sites. It aims to standardize EHR and claims data conversion into the Sentinel Common Data Model and develop processes for storing and extracting metadata from free text notes to enable timely execution of future Sentinel surveillance tasks. | Methods project applying Natural Language Processing (NLP) to extract data from clinical notes to use in pharmacoepidemiology studies, improving FDA's ability to monitor drug safety using real-world healthcare data. | Creates a network of organizations that can support development of algorithms and use of AI tools such as Natural Language Processing (NLP). | 22/12/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Creates a network of organizations that can support development of algorithms and use of AI tools such as Natural Language Processing (NLP). | Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Developing an Objective and Quantitative Endpoint for Atopic Dermatitis in Pediatric and Adult Populations | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The purpose of this study is to validate the Emerald technology to see if it accurately detects the motion of an individual scratching. The act of scratching would not be considered to have a significant effect on human health and safety, which means this study is not a high-impact AI use case based on the definition in OMB M-25-21. | Classical/Predictive Machine Learning | Intended to solve the problem of lacking objective, quantitative methods to assess nocturnal scratching in children with atopic dermatitis. It aims to create a digital endpoint that can accurately measure scratching behavior to evaluate the efficacy and performance of FDA-regulated treatments for atopic dermatitis and pruritus, addressing an unmet need in clinical assessment. | To advance novel endpoints in drug development, potentially leading to better ways to measure treatment effectiveness for skin conditions affecting children and adults. | A digital endpoint that can accurately measure scratching behavior to evaluate the efficacy and performance of FDA-regulated treatments for atopic dermatitis and pruritus, addressing an unmet need in clinical assessment. | 25/01/2026 | a) Purchased from a vendor | Emerald Innovations | No | A digital endpoint that can accurately measure scratching behavior to evaluate the efficacy and performance of FDA-regulated treatments for atopic dermatitis and pruritus, addressing an unmet need in clinical assessment. | The validation data is being collected as part of the study. | No | k) None of the above | Yes | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Drug Shortage Predictive Model | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Drug shortages have increased significantly since 2017 and worsened during COVID-19, creating critical gaps in patient access to essential medications. FDA seeks to develop predictive capabilities to anticipate shortages before they occur. | Help with prevention and mitigation of drug shortages by signaling early risks to a supply chain, potentially ensuring patients maintain access to essential medications. | Prediction of supply events for all CDER regulated application products in the next 12 months | 24/05/2026 | b) Developed in-house | Yes | Prediction of supply events for all CDER regulated application products in the next 12 months | CDER data on submissions, drug shortages, compliance, reviews. External data on sales dollars and volume and product information. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Category Subcategory Classification - Safety Reports Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Other | Manual analysis and data entry of safety report submissions is time-intensive and requires staff to review scanned PDFs and determine appropriate categories. | Potentially reduces manual labor in processing safety report submissions, allowing FDA staff to focus on safety analysis and regulatory decision-making rather than data entry tasks. | Predicts the category/subcategory of IND submissions through rule-based setup. | 24/05/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Predicts the category/subcategory of IND submissions through rule-based setup. | Applicant submissions | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Data Extraction from IND Safety Reports using OCR/AI Technologies | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Manual extraction of data from IND safety reports is labor-intensive and time-consuming for regulatory staff. | Expedited processing of Investigational New Drug (IND) Safety Reports received by the Agency, enabling more rapid regulatory action in response to reported adverse events and better protection of clinical trial participants. | Using ThinkTrends, a COTS tool, extracted data is converted into E2B(R2) format for automatic ingestion into FAERS LSMV | 25/03/2026 | a) Purchased from a vendor | ThinkTrends | Yes | Using ThinkTrends, a COTS tool, extracted data is converted into E2B(R2) format for automatic ingestion into FAERS LSMV | MedWatch 3500 forms and intake system processing logic | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CDER Style Guide AI Editing Tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | This AI model is designed to detect style and formatting inconsistencies in FDA draft documents by comparing them against the FDA CDER Style Guide standards. It solves the problem of manual quality control inefficiencies and ensures consistent adherence to established documentation standards across all FDA CDER publications. | It significantly reduces the workload for FDA editors by identifying style and formatting issues; however, human review remains essential, especially for important documents, ensuring both efficiency and quality in FDA communications. | AI-identified style and formatting issues | 25/03/2026 | b) Developed in-house | No | AI-identified style and formatting issues | CDER Style Guide | No | k) None of the above | No | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Use case package 1: Empirical application of the Sentinel EHR and claims Data Partner network to address ARIA insufficient inferential requests (UC1) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of determining whether available data sources and analytical methods are suitable for specific pharmacoepidemiologic research questions. It aims to systematically evaluate data fitness-for-purpose and identify viable use cases where protocol-based studies can reliably assess drug safety and effectiveness in real-world populations. | Improved capture of unstructured Electronic Health Record (EHR) data for drug safety studies, enabling FDA to better assess medication safety and effectiveness using real-world healthcare information. | Natural Language Processing (NLP) for Electronic Health Record (EHR) unstructured data extraction | 23/09/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) and contractor teams | Yes | Natural Language Processing (NLP) for Electronic Health Record (EHR) unstructured data extraction | Claims-Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CI5: Development and refinement of toolkits for routine use in the EHR and claims Data Partner network | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This project is designed to solve the problem of inconsistent or inefficient data analysis capabilities across the EHR and claims Data Partner network. It aims to create standardized, reliable analytical tools that can be routinely deployed across different data partners to improve the consistency, quality, and efficiency of pharmacoepidemiologic analyses within the Sentinel System. | Improved confounding control when using Electronic Health Record (EHR) data for drug safety studies, leading to more reliable conclusions about medication risks and benefits in real-world populations. | Regularized machine learning tools (e.g., Least Absolute Shrinkage and Selection Operator (LASSO)-based models) combined with targeted learning methods for improved large-scale covariate adjustment | 23/09/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Regularized machine learning tools (e.g., Least Absolute Shrinkage and Selection Operator (LASSO)-based models) combined with targeted learning methods for improved large-scale covariate adjustment | Electronic Health Record (EHR) Data Elements | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Use case package 2 (UC2): Empirical application of the Sentinel EHR and claims Data Partner network to enhance ARIA insufficient inferential requests and atypical descriptive requests | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of translating theoretical innovative methods into practical, real-world applications within the Innovation Center (IC) development network. It aims to create concrete, evidence-based examples that demonstrate how new technologies and approaches can be effectively implemented to address specific pharmacoepidemiologic and drug safety surveillance challenges. | Developing advanced methods including machine learning to address incomplete information in drug safety studies, validating health outcome algorithms using Natural Language Processing (NLP)-assisted chart review, and applying NLP to analyze cannabis-derived product exposures in electronic health records, ultimately improving FDA's drug safety surveillance capabilities. | EHR data elements; EHR-linked to claims data | 23/09/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | EHR data elements; EHR-linked to claims data | EHR data elements; EHR-linked to claims data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | FE5: Incorporate a range of frequently used engineering features from EHRs into the Sentinel common data model in the Sentinel EHR and claims linked Data Partner network | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of extracting valuable clinical information trapped in unstructured free-text fields within electronic health records. It aims to create a systematic feature engineering approach that can convert narrative clinical notes and text data into structured, analyzable formats for pharmacoepidemiologic research and drug safety surveillance. | Supports the use of Natural Language Processing (NLP) to extract information on five specific medical concepts from Electronic Health Record (EHR) data and make it available in the Sentinel Common Data Model for future drug safety studies, enhancing FDA's surveillance capabilities. | NLP for EHR unstructured data extraction | 23/09/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | NLP for EHR unstructured data extraction | Free-text data from the commercial and development network EHR-claims | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Develop an empirical algorithm to automate negative control identification in Sentinel System using the Data-driven Automated Negative Control Estimation (DANCE) algorithm | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of optimizing the Data-driven Automated Negative Control Estimation (DANCE) algorithm for real-world implementation in large electronic healthcare database studies. It aims to use plasmode simulation to refine the algorithm's performance and then validate the tailored approach through a multisite test case focused on safety endpoint detection, ensuring the method works effectively across different healthcare data environments. | Supports the use of plasmode simulation to evaluate and tailor implementation of DANCE in settings relevant to large electronic healthcare database studies and to apply the tailored DANCE algorithm to a test case incorporating a safety endpoint in a multisite implementation, improving FDA's ability to detect drug safety signals. | Electronic Health Record (EHR) data | 24/03/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Electronic Health Record (EHR) data | Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Support tools that can be used in conjunction with Electronic Health Record (EHR) data, such as machine learning and natural language processing (NLP), and the use of Artificial Intelligence (AI) chart review tools | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of rapidly responding to urgent or emerging drug safety signals that require immediate attention and coordinated action. It aims to leverage key expertise and resources at the Sentinel Operations Center to quickly address time-sensitive safety concerns that may pose risks to public health. | Supports using AI tools to help with medical chart review for emerging safety needs, enabling faster response to urgent drug safety concerns that require immediate attention to protect public health. | Electronic Health Record (EHR) data | 24/10/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Electronic Health Record (EHR) data | Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Sentinel System Task Order to address an Emerging Safety Need | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This AI project is designed to solve the problem of efficiently validating emerging safety signals through chart review when traditional structured data is insufficient. It aims to use NLP-supported tools to extract and analyze information from unstructured clinical notes, enabling faster and more comprehensive chart abstraction and adjudication processes for urgent safety investigations. | This allows FDA to apply Natural Language Processing (NLP) capabilities to extract data from Electronic Health Records (EHRs) as needed to address regulatory gaps around emerging safety needs, enabling faster response to potential drug safety concerns. | Claims and EHR data | 24/04/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | Claims and EHR data | Claims and Electronic Health Record (EHR) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | FOIA REDACTION (FRED) TOOL | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | FRED should not be considered a high-impact AI use case because: a. It operates as a recommendation system only, with mandatory human review and approval required for all outputs. b. No automated decisions are made; humans retain full control over all redaction decisions. c. It serves as an assistive tool to improve efficiency while maintaining human oversight and accountability for all FOIA compliance decisions. | Generative AI | FRED is designed to support FOIA staff in redacting records more efficiently and consistently. FOIA redaction can be a time-consuming process that generates large backlogs of requested documents. It aims to use AI to analyze, identify, and generate predictions of text for redaction, thereby improving the efficiency of FOIA response processing. | More efficient releases of documents in response to FOIA requests and a reduction in backlogs, helping ensure public access to government information while protecting sensitive data appropriately. | FRED produces a PDF with boxes around text that it recommends for redaction along with comments for the redaction code | 25/05/2026 | c) Developed with both contracting and in-house resources | Contractor teams | Yes | FRED produces a PDF with boxes around text that it recommends for redaction along with comments for the redaction code | Completed 483 forms before redaction and versions of those forms after redaction by FDA staff | Yes | k) None of the above | Yes | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Regulatory Review (AIRR) Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI-assisted regulatory review paradigm addresses the inefficiency and administrative burden of manually searching through vast amounts of disconnected sponsor-submitted and FDA-generated documents by integrating three components: AI-powered prompt engineering for streamlined workflows, real-time regulatory data retrieval systems, and automated document formatting capabilities that maintain human oversight while significantly enhancing review efficiency and consistency. | Enables faster and more efficient regulatory reviews by reducing time spent on document searching and information gathering, allowing FDA reviewers to focus on scientific analysis and decision-making. Maintains high review quality while improving consistency across review teams and reducing administrative burden on expert reviewers. | Preliminary administrative or review summary (available in Word format) for reviewers' verification and refinement | 25/05/2026 | c) Developed with both contracting and in-house resources | In-house (CDER staff) and contractor teams | Yes | Preliminary administrative or review summary (available in Word format) for reviewers' verification and refinement | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI-assisted Platform for Clinical Pharmacology Review | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI solution addresses inefficiency in the clinical pharmacology review process by reducing time reviewers spend on routine tasks, allowing them to focus their expertise on complex scientific analysis and decision-making that truly requires their specialized knowledge. | The AI integration is expected to enhance agency efficiency by optimizing reviewer time allocation, allowing them to focus on high-value tasks requiring specialized expertise. This leads to improved productivity, reduced delays, and enhanced overall performance. For the public, this translates to more timely and higher quality regulatory reviews of new medications. | AI assisted answers to list of tasks in selected task groups for clinical pharmacology review, including supporting information helping reviewers to identify the source of information from original documents. | AI assisted answers to list of tasks in selected task groups for clinical pharmacology review, including supporting information helping reviewers to identify the source of information from original documents. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | CoreDF | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Other | Current nonclinical review processes require manual extraction and analysis of sponsor findings from lengthy PDF study reports, creating inefficiencies in data quality assessment and regulatory timelines. | Expedites nonclinical review by extracting and organizing key safety findings from study reports, allowing FDA reviewers to focus on scientific evaluation and safety assessment rather than manual data extraction. | Sponsor findings from non-clinical study reports | 25/05/2026 | c) Developed with both contracting and in-house resources | IBM | Yes | Sponsor findings from non-clinical study reports | non-clinical study reports (PDFs) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Bioanalytical Study Risk Assessment and Inspection Readiness | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Challenges in assessing large amounts of analytical/bioanalytical study information for risk assessment and inspection preparation in a short period of time | Efficient and thorough review of bioanalytical portions of pivotal studies, enabling risk assessors and reviewers to identify and address potential issues. This benefit promotes public health by ensuring the welfare of study subjects, and helping the office verify the quality, study integrity, and regulatory compliance of Bioavailability/Bioequivalence (BA/BE) studies supporting CDER-regulated drugs. | Summary including reanalysis, deviations from method SOPs or protocols, inconsistencies or gaps in data reporting, deviations from data acceptance criteria, and deviations from the method validation; description of potential impact on study outcome. Outputs are verified by FDA staff. | 25/06/2026 | b) Developed in-house | Yes | Summary including reanalysis, deviations from method SOPs or protocols, inconsistencies or gaps in data reporting, deviations from data acceptance criteria, and deviations from the method validation; description of potential impact on study outcome. Outputs are verified by FDA staff. | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Clinical Study Risk Assessment and Inspection Preparation | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Challenges in assessing large amounts of clinical study information for risk assessment and inspection planning in a short period of time | Efficient and thorough review of clinical portions of pivotal studies, enabling risk assessors and reviewers to identify and address potential issues. This benefit promotes public health by protecting study subjects, and by helping the office verify the quality, study integrity, and regulatory compliance of Bioavailability/Bioequivalence (BA/BE) studies supporting CDER-regulated drugs. | Summary including any inconsistencies, discrepancies, missing information, protocol deviations, unforeseen circumstances, unexpected adverse events, severe or serious adverse events, and modifications to processes or procedures; description of potential impact on study outcome. Outputs are verified by FDA staff. | 25/06/2026 | b) Developed in-house | Yes | Summary including any inconsistencies, discrepancies, missing information, protocol deviations, unforeseen circumstances, unexpected adverse events, severe or serious adverse events, and modifications to processes or procedures; description of potential impact on study outcome. Outputs are verified by FDA staff. | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI for Assessing Bioanalytical Study Conduct Alignment with Guidance and Method SOPs | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Cross-comparing large amounts of bioanalytical data (reports, validation, tabulated data) with the M10 guidance and aligned method Standard Operating Procedures (SOPs) to identify all possible study conduct issues. | Expedited identification of bioanalytical study conduct issues through efficient cross-comparisons between the M10 guidance, method Standard Operating Procedures (SOPs), and bioanalytical study reports/data before and during inspections, helping ensure data quality and regulatory compliance. | Organized summary of bioanalytical study conduct deviations from M10 principles, regulations, and method SOPs, followed by a summary of potential study impact. Outputs are verified by FDA staff. | 25/01/2026 | b) Developed in-house | Yes | Organized summary of bioanalytical study conduct deviations from M10 principles, regulations, and method SOPs, followed by a summary of potential study impact. Outputs are verified by FDA staff. | No FDA data were used to train, fine-tune or evaluate | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | SCANS Facility Role Predictor | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI was used to identify facilities that are Active Pharmaceutical Ingredient (API) or Finished Dosage Form (FDF) manufacturers and that were either missed or misclassified in regulatory tracking systems. | Successfully identified previously missed facilities without manually reviewing documents, improving FDA's ability to maintain comprehensive oversight of the drug manufacturing supply chain | The output is a classification of Non-manufacturer, API, FDF, or API/FDF | 24/07/2026 | b) Developed in-house | No | The output is a classification of Non-manufacturer, API, FDF, or API/FDF | We used 356H documents, which are vendor-submitted documents describing facilities used in the production of a drug product. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | Document Room Submission AI-Assisted Categorization (previously: Document Room Submission Auto-categorization) | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The current document room submission categorization process is highly manual and costly, impacting end-to-end regulatory review acceleration by creating bottlenecks, increasing processing times, and reducing overall efficiency in the review workflow. | This process enhancement optimizes resource allocation by freeing up personnel for higher-value scientific review activities, improves regulatory predictability for industry sponsors through standardized processing, and strengthens FDA's ability to respond effectively to public health priorities while maintaining comprehensive audit trails and compliance standards. | Submission category and subcategory as well as submission metadata | Submission category and subcategory as well as submission metadata | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDER | AI-Assisted Drug Review Letter Drafting (previously: Drug Review Letter Generation using GenAI) | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This approach to AI-assisted document drafting provides faster implementation with minimal technical requirements, future-proofs document generation capabilities, reduces ongoing maintenance costs, and eliminates technical complexity by replacing code-heavy solutions with dynamic AI prompts. | This approach to AI-assisted document drafting provides faster implementation with minimal technical requirements, future-proofs document generation capabilities, reduces ongoing maintenance costs, and eliminates technical complexity by replacing code-heavy solutions with dynamic AI prompts. | Document content that can be easily embedded either in a PDF/Word document or be shared in an email. | Document content that can be easily embedded either in a PDF/Word document or be shared in an email. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDRH | Medical Data Enterprise Artificial Intelligence (MDE AI) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Create efficiencies in the regulatory review processes for medical devices; reduce administrative burden to staff and allow them to focus their expertise on scientific and clinical work and not administrative processes | Improved efficiency in the administrative overhead of regulatory review workflows | Outputs support regulatory review and include deficiency text adherence to 4PH, insights to support premarket review, data integrity concerns, signal alerts, etc. | 23/09/2026 | b) Developed in-house | Yes | Outputs support regulatory review and include deficiency text adherence to 4PH, insights to support premarket review, data integrity concerns, signal alerts, etc. | Labeled data sets of FDA-specific premarket and postmarket data | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/CDRH | COMET (Consult Memo Assistant) | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Efficiency of the premarket regulatory review | Improved efficiency in regulatory review workflows using advanced AI tools to leverage institutional knowledge in specific product areas | AI assisted review process analysis with suggested deficiencies | AI assisted review process analysis with suggested deficiencies | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Food AI Decision Engine (FAIDE) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prioritize limited regulatory resources and maximize public health protection. | Reduced regulatory burden on establishments with a lower probability of being violative or causing public health harm; more efficient and effective regulatory oversight. | Probability of being violative per the model's classifier, and whether that probability is above the model's recommended threshold (optimizing sensitivity and specificity). | 23/08/2026 | b) Developed in-house | Yes | Probability of being violative per the model's classifier, and whether that probability is above the model's recommended threshold (optimizing sensitivity and specificity). | Internal FDA sample and inspection data, third-party purchased and open-source data. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Warp Intelligent Learning Engine (WILEE) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify emerging chemical signals and violative food substances by analyzing a large data set in a fraction of the time that it would have taken scientific reviewers to analyze the publications. | By enhancing signal detection and chemical hazard forecasting capabilities, this tool can help anticipate and prioritize hazards, accelerate decision-making, and proactively mitigate risk to consumers. | A prioritized list of emerging signals and an interactive view of supporting documentation/factors. | 23/03/2026 | c) Developed with both contracting and in-house resources | In-house | Yes | A prioritized list of emerging signals and an interactive view of supporting documentation/factors. | Internally generated data during the premarket review process, web data collated from web crawls and a commercial data aggregator, scientific publications retrieved with API calls, grant data published by NIH. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Rapid Intuitive Pathogen Surveillance (RIPS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify and prioritize incoming sources of potential foodborne outbreaks, maximizing public health by reducing the time burden on regulators. | Enhanced WGS signal detection capability allowing regulators to catch emerging foodborne outbreaks before they can cause widespread public harm. | Probability that an environmental food source is regulated by the FDA. | 25/02/2026 | b) Developed in-house | No | Probability that an environmental food source is regulated by the FDA. | Publicly available WGS metadata from NCBI. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | AI-Powered Assistant for Pathogen Detection (AIPD) | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AIPD is designed to address several key challenges in data analysis for foodborne pathogens, such as accessibility barriers for data sources, manual workflow overhead, knowledge gaps in tool selection, and complex project management. | The expected benefits include enhanced data analysis efficiency, improved food safety surveillance, better resource utilization, and knowledge transfer and training. | The AIPD produces AI-assisted data reports | The AIPD produces AI-assisted data reports | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Product Label and Text Extraction System (PLATES) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual food label data extraction causes slow data accessibility and insights. Manual data processing and standardization causes slow data accessibility and insights. Decentralized food label data limits research and regulatory processing of industry compliance and health impacts. | Reduced burden to HFP reviewers and data scientists reviewing and analyzing food product label data, including ingredient and nutrition research. The capabilities have significantly accelerated the data extraction and entry process, providing standardized and parsed structured data 35.29x faster than the manual process (reducing the manual burden by 97.08%). | The system includes a user interface that allows users to upload food product images to receive extracted, standardized (utilizing FoodTrak standards), metadata-attached structured data for 30+ key food data elements that can be reviewed and saved, exported, or published to downstream databases. | 24/06/2026 | c) Developed with both contracting and in-house resources | Trigent Solutions Inc, Digitrix LLC | Yes | The system includes a user interface that allows users to upload food product images to receive extracted, standardized (utilizing FoodTrak standards), metadata-attached structured data for 30+ key food data elements that can be reviewed and saved, exported, or published to downstream databases. | Internal FDA FoodTrak and OLOAS data. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/HFP | Data Ingestion and Content Explorer (DICE) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Multiple stakeholders across HFP have the business need to search for content within artifacts and documents uploaded to various systems. For systems like CARA and CARTS, document search capabilities are limited due to the Appian technology stack utilized by these systems. As the Human Food Data Platform continues to grow, it will also need to provide SMEs with the capability to search within the data platform. Offices need a quicker way to search for content within documents and databases to find relevant data across a multitude of use cases including regulatory and compliance reviews, outbreak response investigation, and research tracking and administration. Additionally, multiple HFP offices have business processes requiring extracting structured data from unstructured documents for data analysis, regulatory reviews, and other business intelligence insights which are currently supported through manual operations. DICE will enable SMEs to obtain properly formatted structured data from unstructured data sources. | Accelerates the time for subject matter experts (SMEs) to find relevant data and content lost within images, hand-written documents, emails, and other artifacts and provides this in a one-stop-shop user experience. Allows users to search through millions of documents quickly and makes data accessible to everyone in the HFP, not just those who have backend access. Extracting text from these artifacts makes it available for further analysis and natural language processing. Data can be further processed to detect sentiment, entities, key phrases, syntax, and topics. 
AWS- and API-based architecture brings a flexible and scalable framework to HFP to facilitate search use cases while enabling a cost-effective solution. Shared infrastructure for unstructured and structured intelligent search capabilities minimizes cost across CFSAN offices that share this need. | The system includes a user interface that allows users to view returned search results, templatize unstructured documents using the intelligent document processing workflow, and view extracted text with confidence scores from unstructured documents. | 24/07/2026 | c) Developed with both contracting and in-house resources | Trigent Solutions Inc, Digitrix LLC | No | The system includes a user interface that allows users to view returned search results, templatize unstructured documents using the intelligent document processing workflow, and view extracted text with confidence scores from unstructured documents. | CARTS system data, CARA system data. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OC | Smart Solution for Docket Management (SSDM) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the extremely labor-intensive and time-consuming process of manually collating, de-duplicating, and categorizing public comments on FDA dockets. | 1. Significant time and resource savings: The platform aims to save the Agency a substantial number of staff hours by assisting in reducing redundant and time-consuming tasks that can take weeks to complete manually. 2. Enhanced processing capacity: The platform will enable FDA to effectively handle large-scale comment volumes. 3. Improved accuracy and quality: The AI-powered deduplication, topic modeling, and keyword flagging can potentially enhance the overall quality of comment processing while reducing human error in manual sorting. | The AI-enabled tool provides two main outputs: 1) a line-listing Excel file that organizes comments into groups based on similarity/deduplication thresholds and tags them with AI-identified and SME-approved keywords and topics; and 2) a comment summary Word report that provides structured analysis of themes, key performance indicators, and submitter group breakdowns to assist in regulatory decision-making. | 25/07/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | The AI-enabled tool provides two main outputs: 1) a line-listing Excel file that organizes comments into groups based on similarity/deduplication thresholds and tags them with AI-identified and SME-approved keywords and topics; and 2) a comment summary Word report that provides structured analysis of themes, key performance indicators, and submitter group breakdowns to assist in regulatory decision-making. 
| Public comments on various FDA dockets | No | k) None of the above | Yes | Will explore making the code open source, but not there yet. | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/OC | Elsa GenAI Chat Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | Elsa is designed to support FDA employees by providing clear, accurate information and assistance with work-related tasks. Elsa's primary purpose is to help streamline information access and decision-making processes within the FDA context. | Elsa quickly synthesizes and summarizes information, breaking down complex topics to support faster, more informed decision-making; helps refine communication for maximum impact, from brainstorming and outlining content to drafting and proofreading; and helps employees quickly identify key information across multiple sources. | Elsa can generate text-based responses in paragraphs, bullets, or even in tabular format. | 25/06/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | Elsa can generate text-based responses in paragraphs, bullets, or even in tabular format. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Assisted Systematic Review and Validation of Analytical Worksheets | a) Pre-deployment The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Manual review and validation of analytical worksheets creates bottlenecks in regulatory processes, introduces potential human error, and limits the scalability of quality assurance procedures across FDA operations. Analytical worksheets serve as critical evidence in legal proceedings, enforcement actions, and regulatory decisions affecting public health and safety. These worksheets are the legal documents that will be used and referenced in a court of law and legal proceedings if FDA determines regulatory action should be taken in accordance with the Federal Food, Drug, and Cosmetic Act and subsequent amending supplements codified in Title 21 of the United States Code. | Increased efficiency in worksheet validation processes, standardized review procedures ensuring consistency in legal documents, faster turnaround times for analytical work supporting enforcement actions, enhanced quality assurance for evidence used in court, reduced human error in legally significant documents, and improved consistency in regulatory processes supporting FDA's mission to protect public health. | Assessment summaries identifying errors and inconsistencies in court-admissible documents | Assessment summaries identifying errors and inconsistencies in court-admissible documents | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Generated Data Processing and Visualization Code Development | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Manual coding for data processing and visualization creates barriers to AI adoption, requires specialized expertise not available across all teams, and limits the agency's ability to maximize the value of existing data investments as directed by OMB M-25-21. | Accelerated development of data processing workflows, increased access to advanced analytics capabilities, reduced dependency on specialized programming skills, improved consistency in data visualization standards, and enhanced agency AI maturity through automated code generation capabilities. | AI-assisted code generation for Excel macros and scripts, Power BI formulas and visualizations, data processing algorithms, automated dashboard templates, and reusable code libraries. | AI-assisted code generation for Excel macros and scripts, Power BI formulas and visualizations, data processing algorithms, automated dashboard templates, and reusable code libraries. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Enhanced FDA Regulated Commodity Consumption Pattern Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Traditional analytical methods for determining consumption patterns of FDA-regulated commodities are limited in scope and processing speed, which hinders the comprehensive market surveillance, trend analysis, and evidence-based regulatory science research necessary for informed policy development. | Identification of commodity consumption patterns and trends supporting regulatory science, enhanced understanding of FDA-regulated commodity consumption, improved research capabilities for market surveillance, better-informed policy development through data-driven insights, accelerated evidence generation for regulatory decision-making, and advanced analytical capabilities supporting FDA's public health mission. | Consumption analysis reports, pattern and trend identification reports, trend predictions and forecasting models, market behavior insights and statistical summaries, correlation analyses between consumption patterns and regulatory factors, and research projects supporting real-world evidence-based regulatory science. | Consumption analysis reports, pattern and trend identification reports, trend predictions and forecasting models, market behavior insights and statistical summaries, correlation analyses between consumption patterns and regulatory factors, and research projects supporting real-world evidence-based regulatory science. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OCS | AI-Enhanced High-Dimensional Matrix Dataset Trend and Correlation Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | High-dimensional datasets present analytical challenges that exceed the capabilities of traditional statistical methods, limit the ability to extract meaningful insights from complex regulatory data structures, and hinder the advancement of evidence-based regulatory science. | Discovery of previously unidentified patterns and correlations in regulatory data, enhanced research productivity supporting HHS/FDA legal mandates and mission, improved data utilization efficiency maximizing taxpayer investment, advancement of regulatory science through sophisticated analytical capabilities, and development of innovative approaches to complex data analysis challenges. | Correlation reports identifying key relationships in regulatory data, trend analysis reports supporting regulatory science, pattern identification summaries for complex datasets, statistical significance assessments, and advanced data visualization outputs that inform real-world evidence-based decision-making. | Correlation reports identifying key relationships in regulatory data, trend analysis reports supporting regulatory science, pattern identification summaries for complex datasets, statistical significance assessments, and advanced data visualization outputs that inform real-world evidence-based decision-making. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/ODT | No | ChatBot for Safety Reporting Portal Adverse Events and Product problems submissions | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Increases data integrity and aids in routing the user to the correct commodity group to submit their adverse event or product problem. DRUID AI, a COTS tool, boasts a comprehensive suite of functionalities, encompassing conversational flows and seamless integration with diverse data sources such as SQL, ServiceNow, UiPath, API, and knowledge base services. Its sophisticated Natural Language Processing and Understanding capabilities empower precise interpretation of user queries across various languages and dialects. | Increased user satisfaction by saving time, providing faster form completion, and reducing confusion about where to report an adverse event or product problem. | The AI routes to the correct form, helps the user complete the report faster, and ensures data integrity. The outputs use a knowledge base to answer questions, format responses, route to the correct forms, and use an API to submit the report to SRP without having to use the existing legacy app. | 24/03/2026 | c) Developed with both contracting and in-house resources | Druid | Yes | The AI routes to the correct form, helps the user complete the report faster, and ensures data integrity. The outputs use a knowledge base to answer questions, format responses, route to the correct forms, and use an API to submit the report to SRP without having to use the existing legacy app. | Internal FDA structured and unstructured data from FDA.GOV web crawling | No | Yes | not publicly available | k) None of the above | Yes | No | not publicly available | |||||||||
| Department Of Health And Human Services | HHS/FDA/ODT | Machine Learning as a Service: Translate and extract text from images using AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI project is designed to support the review of foreign products, the extraction of ingredient lists, and border inspections | The ability to quickly translate and extract lists of ingredients from foreign food & drug product labels without the need for human translators | Translated text into English, in JSON format | 22/01/2026 | c) Developed with both contracting and in-house resources | Precise | Yes | Translated text into English, in JSON format | Makes use of Google Translation Hub | No | k) None of the above | Yes | https://git.fda.gov/FDA/OIMT/mlaas | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/ODT | Machine Learning as a Service: Extract data from product labels, business forms, and image files | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI project is designed to assist in capturing data submitted to & reviewed by the FDA by extracting text and data, determining its structure, and saving the information in a more useful format | The ability to extract information such as nutrition information and ingredients from product labels, tabular data, invoices and receipts, and handwritten forms without the need to have users retype or copy & paste the data into FDA applications. | Parsed text and data structured in JSON format, with key/value pairs where appropriate (such as for specific fields in a form or nutrient name & amount on a product label) | 22/01/2026 | c) Developed with both contracting and in-house resources | Precise | Yes | Parsed text and data structured in JSON format, with key/value pairs where appropriate (such as for specific fields in a form or nutrient name & amount on a product label) | Makes use of Google Translation Hub | No | k) None of the above | Yes | https://git.fda.gov/FDA/OIMT/mlaas | |||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Filer Evaluation prioritization using risk-based decision Machine Learning approach | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The FDA's Office of Information Operations (OIO) has an opportunity to enhance its evaluation capabilities across over 4,000 filers in the current FDA inventory by implementing a systematic, data-driven approach to risk assessment and prioritization. By developing standardized evaluation processes and integrated analytical tools, OIO can optimize resource allocation, improve consistency in risk identification, and strengthen the FDA's capacity to effectively protect public health through targeted regulatory oversight. | OIO is responsible for filer evaluations. There are over 4,000 filers in the current FDA inventory, and this ML-based risk scoring approach to identify high-risk filers reduces the burden of sorting through the information manually and provides a standard process for staff to conduct evaluations. An interactive dashboard has been developed that displays model outputs in various forms for staff use. | The ML-based model provides a complete list of filers with all the relevant information along with their relative risk scores for FDA staff to conduct evaluation of the filers. | 23/01/2026 | c) Developed with both contracting and in-house resources | Precise Software Solutions, Inc. | Yes | The ML-based model provides a complete list of filers with all the relevant information along with their relative risk scores for FDA staff to conduct evaluation of the filers. | Import operations data including but not limited to filer evaluation history, corrections to transmitted data, database lookup failures, filer table record creation dates, PREDICT scores | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Electronic translational services for regulatory documents for articles offered for import | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | FDA's Office of Information Operations (OIO) seeks to implement automated translation capabilities for foreign language documents essential to import operations. By integrating translation services directly into OIO staff workflows, the Agency can improve import screening to better serve stakeholders while maximizing staff capacity for core public health protection activities. | The implemented solution automated the electronic translation of inspection/investigation and import documents, labels, industry guidance, policy and regulatory materials, presentations, and educational records. This automation significantly reduced the time staff spent translating documents, reduced the need to find a translator to read and understand foreign language documents, increased reliability and timeliness for enforcement actions, increased destruction of misbranded FDA-regulated products at the IMFs, and increased the ability to provide regulatory materials in foreign languages. | A translation service interface developed within an imports entry review system that utilizes the Google Translate API provides the required translation of the entries entering the US supply chain. This will provide translation to the FDA imports staff without seeking external solutions and is integrated in the current system used by the consumer safety officers who are conducting investigations, import operations, etc. | 24/04/2026 | c) Developed with both contracting and in-house resources | Google Translate API / MLaaS / Azure | Yes | A translation service interface developed within an imports entry review system that utilizes the Google Translate API provides the required translation of the entries entering the US supply chain. This will provide translation to the FDA imports staff without seeking external solutions and is integrated in the current system used by the consumer safety officers who are conducting investigations, import operations, etc. | N/a | Yes | N/a | k) None of the above | Yes | N/a - GitHub code is not open source/publicly available - https://git.fda.gov/FDA/OIMT/mlaas | N/a | |||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | AI-Powered Video Analytics for Law Enforcement | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | AI is used to filter relevant video, but outputs are verified by humans, and decisions/actions are performed by humans | Computer Vision | The AI is intended to solve the challenge of manually reviewing large volumes of surveillance video, which is time-consuming, labor-intensive, and prone to human error. | Faster identification of persons, vehicles, and events of interest through AI-powered video search and filtering. Improved accuracy and objectivity in surveillance review and analysis. Increased situational awareness via real-time alerting and behavior detection. Greater operational efficiency, enabling limited staff to manage larger video workloads. Data-driven decision-making supported by trend analysis and visual dashboards. | - Object-level detections: bounding boxes with classifications (e.g., person, vehicle type, animal), attributes (e.g., clothing color, bag, face mask), and movement patterns. - Appearance-based search results: lists of matching individuals or vehicles based on facial features, clothing, or license plate. - Real-time alerts: triggered events based on predefined rules (e.g., line crossing, group formation, presence of a vehicle type), sent via connected systems. - Visual summaries: Video Synopsis® clips that compress hours of activity into short, layered visualizations for faster review. - Dashboards and analytics: aggregated data on movement, dwell time, crowding, object counts, and traffic patterns to inform operational decisions. | 23/01/2026 | a) Purchased from a vendor | Milestone | Yes | - Object-level detections: bounding boxes with classifications (e.g., person, vehicle type, animal), attributes (e.g., clothing color, bag, face mask), and movement patterns. - Appearance-based search results: lists of matching individuals or vehicles based on facial features, clothing, or license plate. - Real-time alerts: triggered events based on predefined rules (e.g., line crossing, group formation, presence of a vehicle type), sent via connected systems. - Visual summaries: Video Synopsis® clips that compress hours of activity into short, layered visualizations for faster review. - Dashboards and analytics: aggregated data on movement, dwell time, crowding, object counts, and traffic patterns to inform operational decisions. | Yes | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Computer Vision to translate and mine Product Labeling photos to analyze labeling for potential violations | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The Computer Vision (CV) project aims to automate label extraction (identification of label information) for regulatory compliance, reducing manual effort and improving accuracy in detecting label discrepancies. | Computer Vision | The intent is to reduce the amount of time import operations users spend reviewing the product labeling of imported products for violations. | Reduces the time to spot violations on imported products, increasing the efficiency of reviews. | Label text extraction and violation(s) detection | Label text extraction and violation(s) detection | |||||||||||||||||||||
| Department Of Health And Human Services | HHS/FDA/OII | Intelligent Document Processing to analyze current import entry documentation for potential discrepancies | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The intent is to assist in identifying discrepancies between documentation submitted by trade and CBP line data. | By streamlining the manual document review process for standard entry documentation from trade, we can significantly reduce the time required for each line review, freeing up substantial analyst capacity to focus on high-risk shipments that pose greater threats to public health and safety. This efficiency improvement enables better resource allocation, reduces processing bottlenecks, and supports faster clearance times for compliant shipments while maintaining robust oversight. The enhanced operational efficiency directly supports FDA's core mission by enabling more targeted, risk-based resource deployment and improving both trade facilitation and import safety program integrity. | List of discrepancies between document data and CBP line-level information | List of discrepancies between document data and CBP line-level information | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | fac.gov | AI Audit Resolution Assistant (AIARA) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Agentic AI | The DFI PRF AIARA project leverages AI to streamline the Single Audit process. The team utilizes Robotic Process Automation (RPA) to extract data from the Federal Audit Clearinghouse, to create templates for both the audit notification and management decision letters, and to generate a comprehensive report of the auditee and its findings. | Since its launch, AIARA has successfully processed and resolved 73 audits. Automation has saved an estimated total of 276 hours of work. AIARA has significantly enhanced both efficiency and consistency in the audit resolution process. | HRSA utilizes generative AI to streamline the Single Audit resolution process and the creation of Management Decision Letters to formally close out and resolve Single Audit findings. The AI Audit Resolution Assistant (AIARA) includes a vector database composed of Single Audit documents assigned to HRSA, uses retrieval-augmented generation integrated with a large language model to intelligently summarize audit findings and recommendations, and provides chatbot capability that reduces the cognitive load on HRSA auditors for audit-specific questions. | 24/07/2026 | c) Developed with both contracting and in-house resources | Mindpetal | Yes | HRSA utilizes generative AI to streamline the Single Audit resolution process and the creation of Management Decision Letters to formally close out and resolve Single Audit findings. The AI Audit Resolution Assistant (AIARA) includes a vector database composed of Single Audit documents assigned to HRSA, uses retrieval-augmented generation integrated with a large language model to intelligently summarize audit findings and recommendations, and provides chatbot capability that reduces the cognitive load on HRSA auditors for audit-specific questions. | Single Audit report from FAC.gov | fac.gov | No | No | k) None of the above | Yes | No | ||||||||||
| Department Of Health And Human Services | HHS/HRSA | Knowledge Navigator | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | The objective is to develop an AI model that can answer detailed and complex questions about the key programmatic document, the Application and Program Guidance (APG), that is issued annually before the application cycle opens. The PoC LLM includes APG documents for 10 loan repayment and scholarship programs. | This will allow loan repayment and scholarship analysts and call center agents to better respond to public inquiries from applicants and participants | BHW has implemented a proof-of-concept Generative AI (GenAI) Large Language Model (LLM) Knowledge Navigator (KN) to support National Health Service Corps (NHSC) and Nurse Corps loan repayment and scholarship program analysts and call center agents in responding to program applicants and participants. | 24/06/2026 | c) Developed with both contracting and in-house resources | Publicis Sapient | Yes | BHW has implemented a proof-of-concept Generative AI (GenAI) Large Language Model (LLM) Knowledge Navigator (KN) to support National Health Service Corps (NHSC) and Nurse Corps loan repayment and scholarship program analysts and call center agents in responding to program applicants and participants. | Existing program application guidance documentation (APGs) | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Medical records summarization | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI will reduce the amount of time required to review and evaluate a claim. | The AI will take medical records numbering in the thousands and produce a summary document of key elements to conduct a claims review. | AI will facilitate the collection and preprocessing of unstructured data and create a condensed (and indexed) document for AI to intelligently review the thousands of pages of medical/legal documents as part of the claims review process. | AI will facilitate the collection and preprocessing of unstructured data and create a condensed (and indexed) document for AI to intelligently review the thousands of pages of medical/legal documents as part of the claims review process. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | NOFO Compliance Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Other | HRSA leadership would benefit from an automated solution to evaluate HRSA Notices of Funding Opportunity (NOFOs) against dynamically changing Executive Orders (EOs) and OMB memos to ensure NOFO compliance with White House priorities. | This solution will enhance operational efficiency by reducing document creation time from weeks to days while ensuring more consistent, error-free NOFOs through automated quality assurance. The solution is intended to provide more consistent, readable NOFOs and reduce barriers for smaller organizations. | The NOFO Compliance Assistant is an innovative application utilizing large language models (LLMs) to generate first drafts of key policy documents. Inputs can include example documents, style guides, key policy decisions, and other documents in the Knowledge Navigator. The NOFO Compliance Assistant also features an editing tool that scrutinizes drafts for inconsistencies or errors, offering feedback for refinement. Current planning leverages cloud-based services, LLMs, text processing and analysis tools, Natural Language Generation (NLG), and text analysis for the implementation. | The NOFO Compliance Assistant is an innovative application utilizing large language models (LLMs) to generate first drafts of key policy documents. Inputs can include example documents, style guides, key policy decisions, and other documents in the Knowledge Navigator. The NOFO Compliance Assistant also features an editing tool that scrutinizes drafts for inconsistencies or errors, offering feedback for refinement. Current planning leverages cloud-based services, LLMs, text processing and analysis tools, Natural Language Generation (NLG), and text analysis for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | PRF Program Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Answer inquiries related to GAO, FOIAs, and litigation quickly and efficiently | PRB leadership is equipped to manage the PRF program effectively by responding to ad hoc inquiries in a way that enhances the customer experience while adapting to current reductions in staff | The app will use retrieval-augmented generation (RAG) to answer questions using a knowledge base composed of PRB's programmatic SME documents. The responses will be sourced and cited from program documents so that they are verifiable. This technology will help scale staff access to deep program information and ease the burden of turnover among key staff who possess historical and institutional knowledge by ingesting their key work products into the AI application. | 25/02/2026 | c) Developed with both contracting and in-house resources | GDIT Inc | Yes | The app will use retrieval-augmented generation (RAG) to answer questions using a knowledge base composed of PRB's programmatic SME documents. The responses will be sourced and cited from program documents so that they are verifiable. This technology will help scale staff access to deep program information and ease the burden of turnover among key staff who possess historical and institutional knowledge by ingesting their key work products into the AI application. | The program is using AWS Bedrock for GenAI services, powered by the Claude 3 Haiku LLM. The knowledge base for the RAG architecture is built on over 2,000 PRF program-specific user guides and documentation. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Scholar Match | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The current candidate evaluation and placement process is complex and challenging for both analysts and participants. Enhancing this process with AI/ML support could optimize resource allocation and candidate satisfaction, significantly impacting workforce distribution and efficiency in critical health areas. | This would improve the process of matching NHSC and Nurse Corps Scholars going into clinical service in underserved communities. | The Scholar Match (SM) leverages AI to enhance the placement process of NHSC and Nurse Corps scholars in communities of need across the U.S. and territories. By analyzing candidate profiles and regional needs, SM recommends optimal placements, ensuring both the fulfillment of organizational needs and the satisfaction of the candidates. Current planning leverages machine learning, recommendation systems, cloud-based platforms, and data analytics services for the implementation. | The Scholar Match (SM) leverages AI to enhance the placement process of NHSC and Nurse Corps scholars in communities of need across the U.S. and territories. By analyzing candidate profiles and regional needs, SM recommends optimal placements, ensuring both the fulfillment of organizational needs and the satisfaction of the candidates. Current planning leverages machine learning, recommendation systems, cloud-based platforms, and data analytics services for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Scholarship Insight | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | NHSC and Nurse Corps Scholarship reviewers must receive and score thousands of submitted essays, and the manual process delays the application cycle. We plan to build a Generative AI program that reviews previous essays and how the scoring rubric was applied, and train it to perform the first cut at scoring the essays. This could enhance the fairness and efficiency of scholarship evaluations, ensuring a thorough review process that supports equitable student opportunities. | Reduce the amount of time required for internal or external reviewers to evaluate NHSC and Nurse Corps scholarship applications and ensure that human reviewers are following the appropriate scoring rubric. | The Scholarship Insight (SI) is designed to support the evaluation of scholarship essays for both the NHSC and Nurse Corps Scholarship Programs by providing detailed analysis to human graders. Aligning with directives to ensure human oversight, SI identifies key themes, strengths, and weaknesses in essays, facilitating a more informed grading process without replacing human judgment. Current planning leverages cloud-based services, Natural Language Understanding (NLU), text analysis, an LLM API, and data analysis tools for the implementation. | The Scholarship Insight (SI) is designed to support the evaluation of scholarship essays for both the NHSC and Nurse Corps Scholarship Programs by providing detailed analysis to human graders. Aligning with directives to ensure human oversight, SI identifies key themes, strengths, and weaknesses in essays, facilitating a more informed grading process without replacing human judgment. Current planning leverages cloud-based services, Natural Language Understanding (NLU), text analysis, an LLM API, and data analysis tools for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Site Application Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The current application review process requires internal analysts to review many complex documents to determine a site's eligibility to employ a member of the NHSC or Nurse Corps. Enhancing this process with AI/ML support could revolutionize and streamline the NHSC, Substance Use Disorder Treatment and Recovery (STAR), and Nurse Corps Site Application Process. | This would improve the current highly manual process for reviewing Site applications for internal analysts while ensuring that clinical sites are properly approved for NHSC, Nurse Corps, and STAR LRP. | The Site Application Analysis (SA) is designed to support the evaluation of NHSC, STAR, and Nurse Corps Site Applications. SA will allow for faster, more accurate review of Site Applications and allow BHW Regional Analysts to focus on higher-value tasks. Current planning leverages Cloud-based services, Machine Learning, Recommendation Systems, Natural Language Understanding (NLU), and Text Analysis for the implementation. | The Site Application Analysis (SA) is designed to support the evaluation of NHSC, STAR, and Nurse Corps Site Applications. SA will allow for faster, more accurate review of Site Applications and allow BHW Regional Analysts to focus on higher-value tasks. Current planning leverages Cloud-based services, Machine Learning, Recommendation Systems, Natural Language Understanding (NLU), and Text Analysis for the implementation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | RefineAI - Enhanced Summary Statements | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | DIR would benefit from an automated system that corrects grammatical issues, redundant comments, and acronym expansion at the merging stage within ARM, increasing efficiency in panels and devoting more time to discussion, with the hope of more useful feedback without the time pressure caused by editing. At least ten percent of overall ORC panel time is devoted to editing in real time for grammatical and duplicative issues. On average, discussion of an application takes 60 minutes; if we save 6 minutes of editing time per application, we can reduce contractor support costs and post-panel editing costs. In addition, we can shift the panel's focus to content-specific discussion, promoting higher-quality feedback. | The expected benefits of the automated solution are: - Increasing efficiency during ORC panel discussions, ultimately leading to lower contractor support and ORC costs, by providing a merged summary statement that highlights duplicate comments, expands acronyms, and proposes grammatical fixes at the merging stage to devote more time to content-specific feedback in panel discussions - Increasing the quality of feedback to applicants | The DIR staff works directly with business owners to set up Objective Review Committees (ORC). Currently, the logistics contractor pulls raw comments from the Application Review Module (ARM) submitted by the three primary reviewers. The raw comments are sent to reviewers and HRSA staff in the merged summary statement in advance of the ORC for awareness and to facilitate discussion in the ORC panel. However, the Summary Statement Operator (SSO) must correct grammar issues, spell out acronyms, and change to present tense in real time, as well as remove duplicates. DIR would like an automated system that corrects these issues at the merging stage to increase efficiency in panels and devote more time to discussion, with the hope of more useful feedback without the time pressure caused by editing. | The DIR staff works directly with business owners to set up Objective Review Committees (ORC). Currently, the logistics contractor pulls raw comments from the Application Review Module (ARM) submitted by the three primary reviewers. The raw comments are sent to reviewers and HRSA staff in the merged summary statement in advance of the ORC for awareness and to facilitate discussion in the ORC panel. However, the Summary Statement Operator (SSO) must correct grammar issues, spell out acronyms, and change to present tense in real time, as well as remove duplicates. DIR would like an automated system that corrects these issues at the merging stage to increase efficiency in panels and devote more time to discussion, with the hope of more useful feedback without the time pressure caused by editing. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | FOIA Exemption-Aligned Redaction | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | The FOIA staff will leverage this solution to review records in response to FOIA requests, search the internet for publicly available information (names, email addresses, contact information of federal and non-federal individuals), and identify personally identifiable information (PII). The goal is to reduce the burden on the HRSA FOIA staff and process requests quickly. Automating the manual process will save analysts time and effort and will help reduce or prevent human error. The FOIA staff reviews between 35 and 50 pages per hour, totaling 600 hours per month. | The expected benefits of the automated solution are: - Reducing the manual effort and the time that the FOIA staff spends on converting non-PDF electronic records to PDF - Identifying information to withhold/redact under one or more FOIA exemptions - Proposing the redaction markings for human QA - Reducing the size of the PDF | HRSA uses agentic AI to propose redactions and comments for potentially sensitive data elements for Freedom of Information Act (FOIA) staff who review grant documents. The proposed redactions by the system will include the exemption invoked for data elements that are deemed not publicly available. The comments will include a URL citing the source for data elements that are deemed publicly available. The output of the solution will be a PDF file including proposed redactions (and exemptions invoked) along with citation comments for review. FOIA staff will review the proposed redactions and add any additional data elements to be considered for redaction. The data elements added by FOIA staff will then be queried by a search engine to determine public availability and proposed as redactions (not available) or comments (available). FOIA staff will then review the accuracy of the proposed redactions and comments. This technology will help to alleviate the FOIA staff's workload and process requests in a more expedited manner. | HRSA uses agentic AI to propose redactions and comments for potentially sensitive data elements for Freedom of Information Act (FOIA) staff who review grant documents. The proposed redactions by the system will include the exemption invoked for data elements that are deemed not publicly available. The comments will include a URL citing the source for data elements that are deemed publicly available. The output of the solution will be a PDF file including proposed redactions (and exemptions invoked) along with citation comments for review. FOIA staff will review the proposed redactions and add any additional data elements to be considered for redaction. The data elements added by FOIA staff will then be queried by a search engine to determine public availability and proposed as redactions (not available) or comments (available). FOIA staff will then review the accuracy of the proposed redactions and comments. This technology will help to alleviate the FOIA staff's workload and process requests in a more expedited manner. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | HRSA Data Warehouse ChatBot | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | An HRSA Data Warehouse chatbot to respond to public inquiries. | Increase transparency by making health data more accessible, actionable, and equitable, enabling faster insights, smarter decisions, and broader community engagement | Helps the public gain quick access to program data (e.g., Area Health Resources Files, Find Healthcare services, Service Delivery Sites, etc.) | Helps the public gain quick access to program data (e.g., Area Health Resources Files, Find Healthcare services, Service Delivery Sites, etc.) | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Code Conversion for PowerBI Migration | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Generate code to support migration and conversion from Tableau to PowerBI, saving license costs as well as migration labor costs | Saves development costs for the solution, including eliminating contract labor costs | Code that is leveraged to migrate Tableau Reports/Dashboards into PowerBI | Code that is leveraged to migrate Tableau Reports/Dashboards into PowerBI | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | HRSA Fact Sheets | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Generate HRSA's Program Fact Sheets for public use and reduce the cycle time by eliminating the manual review and validation process. | The public can immediately find clear, authoritative facts about HRSA programs, funding, workforce, and outcomes. It also strengthens grant proposals, community outreach, and health planning based on accurate, timely information | Provides HRSA Fact Sheets that are validated and can also embed additional comments so that the public can better understand this data and information | Provides HRSA Fact Sheets that are validated and can also embed additional comments so that the public can better understand this data and information | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Program Reporting System Knowledge base Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Help users get relevant information quickly instead of searching lengthy manuals, FAQs, or scattered documentation, and reduce support costs. | Provides instant, accurate answers to user queries, reducing the time spent searching for information. Additionally, it reduces call center support costs and minimizes the number of calls from grantees seeking system help, improving operational efficiency. | An AI chatbot integrated with the post-award performance reporting system, serving as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | An AI chatbot integrated with the post-award performance reporting system, serving as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Program Reporting System Natural Language Search | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Simplify information availability for faster decision-making and eliminate the maintenance cost of the legacy Solr search server. | Improves efficiency and saves time by presenting information in a simple, natural-language format, helping grant program officers make faster, informed decisions and enhancing overall grant performance monitoring. | A simple natural-language global search functionality for post-award monitoring | A simple natural-language global search functionality for post-award monitoring | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | AI identification of "High-Risk" Health Centers | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Manual identification of high-risk health centers is resource-intensive and may miss key indicators across large datasets. An AI-driven approach would support proactive, data-driven site visit scheduling and technical assistance planning. | AI-driven risk identification would allow BPHC to better allocate resources, prioritize site visits, and provide tailored technical assistance. This will help improve compliance, operational performance, and ultimately the quality of care delivered by health centers. | The system would use predictive analytics and risk modeling to generate a prioritized list of health centers considered "high-risk" based on predefined indicators (e.g., patient safety concerns, poor quality metrics, application anomalies). Outputs will support more targeted oversight and TA deployment schedules. | The system would use predictive analytics and risk modeling to generate a prioritized list of health centers considered "high-risk" based on predefined indicators (e.g., patient safety concerns, poor quality metrics, application anomalies). Outputs will support more targeted oversight and TA deployment schedules. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Targeted Technical Assistance | a) Pre-deployment The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | AI will help identify the health centers and geographic areas that will benefit the most from targeted technical assistance on clinical topics for improving quality metrics | This will lead to better quality of care and improve health outcomes for patients. At the same time, this will reduce costs and staff burnout and increase patient satisfaction with their care | Outputs will be: 1. A list of health centers requiring TA on specific clinical topics (in alignment with MAHA) 2. More focused ROI by implementing the specific TA identified through the process | Outputs will be: 1. A list of health centers requiring TA on specific clinical topics (in alignment with MAHA) 2. More focused ROI by implementing the specific TA identified through the process | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/HRSA | Program Compliance and Reporting Knowledge Base Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Users often need to navigate multiple technical assistance webpages, manuals, documents, and FAQs, or submit inquiries through the contact form, to find answers. This can be time-consuming and strains support capacity. | Provides instant, accurate answers to user inquiries, reducing the time spent searching for information. It would also reduce call center and staff support costs and increase capacity to assist with more complex issues by minimizing calls and inquiries from grantees seeking publicly available information, improving operational efficiency. | An AI chatbot integrated with programmatic requirements and the post-award performance reporting system, which serves as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | An AI chatbot integrated with programmatic requirements and the post-award performance reporting system, which serves as a knowledge base for FAQs and step-by-step system guides. Both grantees and internal HRSA users will use the chatbot to get relevant information faster. The AI chatbot will eventually reduce customer support costs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NICHD RPAB AI/ML NICHD Relevance Model | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | The primary objective is to improve the efficiency, accuracy, and consistency of grant application referral assignments while streamlining the internal process for referring new applications. The AI-generated output supports subject matter experts by providing additional information that helps them make faster decisions and prioritize applications for review. All AI output is used solely as an assistive tool, and every referral decision undergoes 100% human review. Therefore, this use case does not meet the definition of a high-impact AI. | Classical/Predictive Machine Learning | The primary objective is to enhance the efficiency, accuracy, and consistency of grant application referral assignments, while reducing the burden on Subject Matter Experts in RPAB. The AI system is expected to streamline the process of internal referral of new grant applications. | This AI use case increases the efficiency of the grant referral process and ensures difficult applications are triaged in a quicker manner. | Results are presented as class predictions and class probabilities as recommendations for referral liaisons. | 25/04/2026 | b) Developed in-house | No | Results are presented as class predictions and class probabilities as recommendations for referral liaisons. | NIH IMPAC II funded and unfunded grant application data is used. Unstructured text from project abstract, specific aims, and title are encoded and vectorized for model training and inference. Fiscal year, activity code, and RCDC terms are transformed via one-hot encoding for use in model training and inference. PII related to individuals associated with the grant is kept intact to preserve the integrity of the use case of grant application referral and the trends of researchers' focus on particular scientific areas. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Applying for Grants Chat Bot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Save time for applicants by guiding users through the application process step-by-step, recommending additional resources for grant writing, and helping determine eligibility. | It can guide users through the application process step-by-step, recommend additional resources for grant writing, and help determine eligibility to save time. | Input: Grants and Funding information/processes and FAQs for prospective grantees. Output: Targeted resources related to probing questions for end users. | Input: Grants and Funding information/processes and FAQs for prospective grantees. Output: Targeted resources related to probing questions for end users. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Assisted Referral Tool (ART) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool suggests which study sections an application might best fit for peer review. The applicant can then get a better idea of the topics being reviewed in that study section and the expertise the reviewers are likely to have. This tool is also used internally to help efficiently assign applications into study sections. | This tool provides information to applicants about the context of the eventual review of their applications and increases efficiency of internal study section assignments. | SRG recommendations | 15/01/2026 | b) Developed in-house | Yes | SRG recommendations | Previous grant applications submitted to NIH and assigned to the same study sections. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Autism Spectrum Disorder (ASD) Classification Model for Children using Deep Neural Network | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Automated approaches for table extraction | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Automated Basic-Applied Categorization of extramural grants | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This machine-learning algorithm uses information about NIMH-funded research projects to categorize them as basic or applied research per the federal definitions for each. | The algorithm is intended to be consistent in identifying basic and applied research, reduce burden of review by NIMH staff, and provide a complementary perspective to human review. | Categorization of research as basic or applied. | Categorization of research as basic or applied. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Biomedical Citation Selector (BmCS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Time-consuming human review of individual journal articles from multidisciplinary journals to determine inclusion in MEDLINE. | More efficient and effective indexing and inclusion of relevant journal articles, standardization of citation record selection, and reduced processing time | Sets of citation records that are classified as relevant to biomedicine and the life sciences. | 23/01/2026 | b) Developed in-house | No | Sets of citation records that are classified as relevant to biomedicine and the life sciences. | PubMed citation data that was submitted by publishers and stored in the agency database was used. | No | k) None of the above | Yes | https://github.com/ncbi/biomedical-citation-selector | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Clinical Trial Predictor | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Applications that propose clinical trials but are submitted to NOFOs that do not allow clinical trials cannot be funded, no matter how well they do in review, because they were not reviewed using all appropriate clinical trial criteria. This application allows NIH to identify clinical trial applications submitted to NOFOs that do not allow clinical trials so they can be withdrawn before being reviewed and potentially transferred to a NOFO that does allow clinical trials. | The AI tool predicts whether grant applications may involve clinical trials based on the text of their titles, abstracts, narratives, specific aims, and research strategies. It is very difficult to deal with misclassified CTs that make it to review on a CT-not-allowed FOA: no matter how good the score is, the IC cannot fund them. The CT prediction algorithm is used to help identify potential CTs on CT-not-allowed NOFOs, mainly the parent R01. | Input: IMPAC II application data, including titles, abstracts, narratives, specific aims, and research strategies. Output: Prediction of a possible clinical trial submitted to a non-CT NOFO. | 23/05/2026 | b) Developed in-house | No | Input: IMPAC II application data, including titles, abstracts, narratives, specific aims, and research strategies. Output: Prediction of a possible clinical trial submitted to a non-CT NOFO. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ClinicalTrials.gov Protocol Registration and Results System Review Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The quality control review process at ClinicalTrials.gov is time- and resource-intensive. | Some potential benefits include increased efficiency, consistent reviews, resource optimization, and increased scalability. | Prediction of whether a quality issue is present in study registration or results records. | 23/08/2026 | c) Developed with both contracting and in-house resources | No | Prediction of whether a quality issue is present in study registration or results records. | ClinicalTrials.gov study record submissions | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Collections Summarization Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Content discovery methods are complex and time-consuming, and users have difficulty understanding the scope of content without summarization. | Improved discovery and understanding of content in NLM Digital Collections | A summary of the resource presented in text format. | A summary of the resource presented in text format. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | CSR Public Chatbot (CPC) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool answers questions from the public, potential reviewers, and applicants by directly pulling relevant information from official government web pages, instead of searching FAQ lists. | Applicants and reviewers can get their NIH grant application and peer review questions answered quickly and efficiently | This tool recommends original source material that seems to answer the user's question, and allows the user to check the accuracy of the answer. | 22/01/2026 | b) Developed in-house | Yes | This tool recommends original source material that seems to answer the user's question, and allows the user to check the accuracy of the answer. | Applicant FAQs and publicly available content from the public.csr.nih.gov website | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | DAIT AIDS-Related Research Solution | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | DAIT POs need to identify grant applications that involve AIDS-related research (ARR) so they can evaluate them for additional funding. | DAIT ARR suggests prioritization of grant applications that are likely to include AIDS-Related Research to assist POs in prioritizing which grants to select, which improved the review time and quality of review for ARR applications. | This application extracts text from grant applications as input, and then uses classification models to predict the priority and category of each grant application as the output. The output is shared along with other grant application metadata in a custom module. | 18/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | This application extracts text from grant applications as input, and then uses classification models to predict the priority and category of each grant application as the output. The output is shared along with other grant application metadata in a custom module. | A dataset was curated to train the model and is evaluated manually by user input. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detecting Overlapping Science (DOS) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool detects applications that represent potential duplicate funding (funding the same research in different projects). It examines applications as they are submitted to the NIH and sends a report to relevant personnel in the agency. | Detect and prevent duplicate funding with real-time examination of incoming grant applications in a speedy manner. | This tool recommends a more careful examination of flagged applications to determine if an application is a duplicate of existing funding, in violation of NIH policy | 23/01/2026 | b) Developed in-house | Yes | This tool recommends a more careful examination of flagged applications to determine if an application is a duplicate of existing funding, in violation of NIH policy | No | k) None of the above | Yes | ||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detection of Implementation Science focus within incoming grant applications | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool uses natural language processing and machine learning to calculate an Implementation Science score that is used to predict if a newly submitted grant application proposes to use science that can be categorized as "Implementation Science." | The AI report tool assigns the grant application to a particular division for routine grants management oversight and administration. | For inputs, it leverages NHLBI application text (title/abstract) and classification categories in Dimensions for NIH. For outputs, the report provides NHLBI application metadata (unchanged) and a score for relevancy to implementation science. | 20/01/2026 | a) Purchased from a vendor | Digital Science | Yes | For inputs, it leverages NHLBI application text (title/abstract) and classification categories in Dimensions for NIH. For outputs, the report provides NHLBI application metadata (unchanged) and a score for relevancy to implementation science. | Leverages NIH application data from IRDB. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Expansion of Generative AI (GenAI) Caption Generation for all Collections Videos | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | High cost and low efficiency of video transcript and caption generation. | Improved standardization and accuracy of generated video captions | AI generated captions in text format | 20/11/2026 | b) Developed in-house | No | AI generated captions in text format | Audio extracted from U-Matic videos in MP4 format, hosted on collections.nlm.nih.gov | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Federal IT Acquisition Reform Act (FITARA) Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Contracting officers use this tool to help identify whether a Statement of Work meets the criteria of the Federal IT Acquisition Reform Act (FITARA). | Contracting Officers can use this tool to indicate if Statements of Work are likely to be IT-related, which saves significant manual effort and time required to identify relevant contracts. | The user uploads a contract SOW, and the FITARA Tool processes it and predicts, with a confidence score, the likelihood that FITARA applies. The output data from the tool is displayed via a custom module. | 17/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | The user uploads a contract SOW, and the FITARA Tool processes it and predicts, with a confidence score, the likelihood that FITARA applies. The output data from the tool is displayed via a custom module. | A dataset was curated using NIAID SOWs, which were manually labeled to train the classification model. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Generative AI (GenAI) Still Image Tagging | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Other | Lack of a cost-effective mechanism to label and describe digital images. | Enhanced searchability and discoverability of images included in the NLM Digital Collections, increasing access to valuable medical and scientific resources and supporting research and health. | Classification tags and/or image summary in text form | Classification tags and/or image summary in text form | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | individual Functional Activity Composite Tool (inFACT) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Internal Referral Module (IRM) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The output from this AI use case does not drive any agency decision, including the categories listed in section 6 of the memorandum. The output is a recommendation to assign a grant application to the appropriate Agency staff based on the scientific content of the application. The person to whom the grant application has been referred can accept, reject, or reassign the application based on their expertise. | Classical/Predictive Machine Learning | Automated Assignment of Grants to Program Officers | The original IRM application grew out of a desire to refer applications to the appropriate Program Officer to manage the scientific research that fit their portfolio. This manual referral of grant applications still exists within IRM and has been complemented by use of AI/NLP capabilities. | The outputs are referrals to Program Officers, Program Class Codes, organizational units (Divisions and Branches), and Scientific Research Clusters. | 23/02/2026 | c) Developed with both contracting and in-house resources | Leidos and Highrise | Yes | The outputs are referrals to Program Officers, Program Class Codes, organizational units (Divisions and Branches), and Scientific Research Clusters. | We use eRA grant application data for all fine-tuning and optimization for the models. Specifically, we extract the title, abstract, specific aims, and public health narrative to train our models for prediction. | No | k) None of the above | Yes | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | JIT Automated Calculator (JAC) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NIGMS requires extra justification to fund investigators whose total grant money exceeds $1.5M. Applicants are required to submit lengthy forms detailing all of their support and that of their key personnel. These forms take considerable time to read and to tally funding totals. This NLP tool searches the entire form and adds up the totals to help NIGMS program staff determine total funding for all key personnel of a grant application. | At NIGMS we like to know how much total support an investigator has to ensure that we are not funding PIs who are already adequately resourced. However, JIT Other Support forms consist of many pages of freeform text in PDF format, so it can be quite tedious for program officers (POs) to copy and paste information from these forms into a spreadsheet to determine how much funding a PI has. JAC can perform these calculations for POs automatically (assuming, of course, that the information has been entered correctly by the PIs). | Input: Grant application JIT Other Support Form PDFs from NIH IMPAC II database. Output: Editable spreadsheet of parsed data and funding summary data for each person in the Key Personnel tables of the application. | 23/05/2026 | b) Developed in-house | No | Input: Grant application JIT Other Support Form PDFs from NIH IMPAC II database. Output: Editable spreadsheet of parsed data and funding summary data for each person in the Key Personnel tables of the application. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | LLM Support for Admin Services | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Mapping Sequence Data to Research Outcomes | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Effectively allocating resources needed to process, manage, and store a high volume of sequence data. | More efficient operations and resource allocation | Text summaries indicating, in a yes/no fashion, whether research products were identified | Text summaries indicating, in a yes/no fashion, whether research products were identified | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Medical Text Indexer-NeXt Generation (MTIX) MEDLINE Indexing | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Time-consuming and burdensome manual indexing of MEDLINE citations | Cost-effective, timely, and efficient indexing of MEDLINE citation records. | A set of MeSH terms describing the article topic | 23/05/2026 | b) Developed in-house | No | A set of MeSH terms describing the article topic | The MTIX dataset is approximately 10 million PubMed MEDLINE citations published after 2006. It is publicly available data, used for training and evaluation of the MeSH terms predicted by the algorithm. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | MicroStrategy Evaluation | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NanCI: Connecting Scientists | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | NanCI phone/web application to connect scientists. The app uses AI to match scientific content to users' interests. By collecting papers into a folder, a user can engage the tool to find similar articles in the scientific literature and can refine the recommendations by up- or down-voting them. Users can also connect with others via their interests and receive and make recommendations via this social network. Users: Cancer Research Trainees at NCI and across the USA. | Cancer research trainees indicate feeling overwhelmed by information, and finding things of interest is a challenge. Furthermore, they feel isolated. NanCI helps them home in on key content of interest and connect with others who share those interests. | User collects a series of papers by bookmarking them into a folder. AI then uses vector matching to find similar papers. | 23/03/2026 | a) Purchased from a vendor | Google; Barnacle | Yes | User collects a series of papers by bookmarking them into a folder. AI then uses vector matching to find similar papers. | PubMed; Onco Daily | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NBS Virtual Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Service Desk Operations | A robust knowledge base supports the NBSC user community, significantly reducing service requests to the ONBS Service Desk. This allows Subject Matter Experts (SMEs) to concentrate on critical operational and maintenance activities. | Supporting all NBSC business workstreams, the ONBS Assistant enhances user engagement with NBSC Applications by providing comprehensive and personalized assistance. A variety of support document types, such as job aids, CBT training courses, FAQs, and knowledge articles, are the instructional documents used to teach the tool to answer common questions that regularly come in to the core team and the Service Desk. ONBS Subject Matter Experts regularly update the content. The ONBS Assistant is hosted in the NIH Business System Cloud (NBSC) and positioned within the ONBS SharePoint portal. | 25/05/2026 | a) Purchased from a vendor | H2O.GPTe | Yes | Supporting all NBSC business workstreams, the ONBS Assistant enhances user engagement with NBSC Applications by providing comprehensive and personalized assistance. A variety of support document types, such as job aids, CBT training courses, FAQs, and knowledge articles, are the instructional documents used to teach the tool to answer common questions that regularly come in to the core team and the Service Desk. ONBS Subject Matter Experts regularly update the content. The ONBS Assistant is hosted in the NIH Business System Cloud (NBSC) and positioned within the ONBS SharePoint portal. | NBS Training guides, Job Aids, and FAQs. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NCI-DOE Collaboration, MOSSAIC project (Modeling Outcomes using Surveillance Data and Scalable AI for Cancer) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | MOSSAIC applies deep learning natural language processing (NLP) and foundation models to population-based cancer data collected by NCI's Surveillance, Epidemiology, and End Results (SEER) program. DOE's Oak Ridge National Lab (ORNL) has data use agreements (DUAs) with multiple SEER registries to access and train models using SEER data. Two APIs are in production use in the data management system used by the SEER registries -- OncoID, which predicts whether a pathology report is related to cancer, and OncoIE, which extracts key tumor characteristics from unstructured pathology report text. Together these APIs are an important part of moving the US towards near real-time cancer incidence reporting. In addition, a third API, OncoMetsID, which predicts whether a pathology report is indicative of metastatic disease, is in a pilot phase for use in conjunction with other sources of information in the registries to identify recurrent disease. | MOSSAIC enhances the infrastructure of the SEER cancer registries by providing tools that can increase the efficiency and accuracy of manual data abstraction by automatically extracting cancer surveillance data elements. SEER registries receive millions of unstructured clinical text documents that must be manually reviewed, leading to a lag in reporting of US cancer incidence trends. Automated tools such as those developed by MOSSAIC will help us achieve near real-time incidence trends and ultimately a more meaningful report card on the status of cancer in the US. | Input: unstructured (free text) cancer pathology reports. Output: varies depending on the algorithm, but generally a predicted class (e.g., tumor site) and an associated relative confidence score that can be used to tune accuracy. | 21/01/2026 | c) Developed with both contracting and in-house resources | development -- Oak Ridge National Lab; maintenance -- Information Management Services (IMS) | Yes | Input: unstructured (free text) cancer pathology reports. Output: varies depending on the algorithm, but generally a predicted class (e.g., tumor site) and an associated relative confidence score that can be used to tune accuracy. | Data is owned by the NCI SEER registries, which are funded by the NCI. | No | k) None of the above | No | https://computational.cancer.gov/view-model-new?f%5B0%5D=project%3Amossaic&search_api_fulltext=&sort_by=title_1&sort_order=ASC&items_per_page=10, https://github.com/DOE-NCI-MOSSAIC | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | NHLBI Chat | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This is a generic AI chat tool that provides a secure chat interface (similar to public tools like ChatGPT) for NHLBI staff. This tool enables staff to safely and securely explore how generative AI can be used on their sensitive (but non-PII/PHI) workloads. | NHLBI Chat is a secure LLM tool providing access to the Azure OpenAI API so that all NHLBI staff can explore generative AI for their day-to-day needs. | The Azure OpenAI API accepts text as input and returns text as output. Users enter text through a chat interface in a website. | 24/09/2026 | b) Developed in-house | Yes | The Azure OpenAI API accepts text as input and returns text as output. Users enter text through a chat interface in a website. | No | k) None of the above | Yes | https://github.com/NHLBI/LLM_Chat_Interface | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIAMS AI Chatbot Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | To provide a secure, protected environment for NIAMS staff (OD, EP, IRP and IT) to explore, test, and understand how to use AI to be more efficient with a wide variety of administrative tasks, such as general research, summarizing/querying documents, drafting emails, generating programming code, and creating presentation outlines. | The Azure-hosted NIAMS GenAI Chatbot helps employees be more efficient with a wide variety of administrative tasks, such as summarizing/querying documents, drafting emails, and creating presentation outlines. | Input: natural text in the form of user questions and user-uploaded documents. Output: generated text in the form of answers to user questions and generated summaries/queries based on user documents. | 24/09/2026 | c) Developed with both contracting and in-house resources | Microsoft | No | Input: natural text in the form of user questions and user-uploaded documents. Output: generated text in the form of answers to user questions and generated summaries/queries based on user documents. | GPT 4.1 LLM | Yes | Not publicly available | k) None of the above | No | Not publicly available | Not publicly available | |||||||||||
| Department Of Health And Human Services | HHS/NIH | NICHD RPAB AI/ML Application Referral System | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The primary objective is to improve the efficiency, accuracy, and consistency of grant application referral assignments while streamlining the internal process for referring new applications. The AI-generated output supports subject matter experts by providing additional information that helps them make faster decisions and prioritize applications for review. All AI output is used solely as an assistive tool, and every referral decision undergoes 100% human review. Therefore, this use case does not meet the definition of a high-impact AI. | Classical/Predictive Machine Learning | The primary objective is to enhance the efficiency, accuracy, and consistency of grant application referral assignments, while reducing the burden on Subject Matter Experts in RPAB. The AI system is expected to streamline the process of internal referral of new grant applications. | This AI use case increases the efficiency of the grant referral process and reduces overlapping efforts in grant referral review. | Results are presented as class predictions and class probabilities as recommendations for branch assignment. | 24/08/2026 | b) Developed in-house | Yes | Results are presented as class predictions and class probabilities as recommendations for branch assignment. | NIH IMPAC II funded and unfunded grant application data is used. Unstructured text from project abstract, specific aims, and title are encoded and vectorized for model training and inference. Fiscal year, activity code, and RCDC terms are transformed via one-hot encoding for use in model training and inference. PII related to individuals associated with the grant is kept intact to preserve the integrity of the use case of grant application referral and the trends of researchers' focus on particular scientific areas. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NLP Automated Referral | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | First, the NLP Automated Referral tool produces non-binding recommendations regarding which NIGMS program officer is most appropriate to manage an incoming grant application, along with two alternative suggestions. Program officers must actively accept the assignment or refer the application to a more appropriate program officer, and they have full discretion to ignore or override the AI tool's recommendations. The system's output is therefore not a principal basis for any legal or binding decision; it is one of several informational inputs to an internal workflow choice made by human staff. Second, all funding and programmatic decisions are made through established peer review and programmatic processes, governed by existing NIH/NIGMS policies and human judgment. Final funding decisions are made by the NIGMS Director in consultation with the NIGMS Advisory Council, the NIGMS Deputy Director, and the NIGMS Division Directors, not the individual program officers who manage the applications. As a result, the AI's output does not directly affect an individual's or organization's access to Federal funding or other critical government resources or services, nor does it alter anyone's legal status or rights. Finally, the NLP Automated Referral tool does not fall into any of the categories of AI use cases identified in Section 6 of M-25-21 that are automatically designated high-impact. It is a routing tool used for internal staff portfolio management. | Classical/Predictive Machine Learning | Referring applications manually is tedious and time-consuming. Using an automated approach allows staff to focus their time on more difficult tasks. | Automated referral allows NIGMS to retain institutional referral knowledge by training on historical data, eliminates delays in referral by assigning applications as soon as they come in, and reduces burden on staff members, allowing them to allocate more of their time to other high-value tasks. | Input: IMPAC II application data, including titles, abstracts, narratives, and specific aims. Output: Top three most relevant ICs and POs. | 20/08/2026 | b) Developed in-house | No | Input: IMPAC II application data, including titles, abstracts, narratives, and specific aims. Output: Top three most relevant ICs and POs. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | OCIO GenAI Advisor | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | OIT Help Desk Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The OIT help desk would like to shorten ticket resolution time, empower users, and generate non-trivial responses to complex questions in a desired format across areas such as security, architecture, PMO policy, and reports. | If used effectively, the OIT help desk can shorten ticket resolution time, empower users, and generate non-trivial responses to complex questions in a desired format across areas such as security, architecture, PMO policy, and reports. | Publicly available help desk data and NIST policy in PDF format as user-provided data. Prompt and output are in natural language. | Publicly available help desk data and NIST policy in PDF format as user-provided data. Prompt and output are in natural language. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Person-level disambiguation for PubMed authors and NIH grant applicants | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A person or entity may use several name variations on publications and/or grants, which causes uncertainty in attributing research contributions. | Correct attribution of grants, articles, and other products to individual researchers is critical for high-quality person-level analysis. This improved method for disambiguation of authors on articles in PubMed and NIH grant applicants can inform data-driven decision making. | Harmonized data | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | Harmonized data | Biomedical publications and preprints from PubMed and select publicly available preprint servers; grant titles, abstracts, and biosketches from IMPAC II; and ORCID data | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Portfolio Analysis Summarization Tool (PAST) | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Reduce effort for summarization of custom research portfolios | Rapid summarization of custom research portfolios which will be used to support program staff and others across the institute. | Input: Grants data from QVR. Output: Summaries of grant-related texts. | Input: Grants data from QVR. Output: Summaries of grant-related texts. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Program Classification Coding | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To aid users, a solution has been developed that suggests the specific PCC that should be assigned to each application. | The Division staff is responsible for handling a large volume of applications during each council round, with the task of processing and assigning PCCs. To aid users, a solution has been developed that suggests the specific PCC that should be assigned to each application. | Model inputs: IMPAC II data fields (Specific Aims, Project Title, and Study Section). Output: the top 3 predicted PCCs. The user views the list of applications and top 3 suggestions by clicking on a report for the selected council date, and can filter the view by IC/OrgCode and Program Officer names. | Model inputs: IMPAC II data fields (Specific Aims, Project Title, and Study Section). Output: the top 3 predicted PCCs. The user views the list of applications and top 3 suggestions by clicking on a report for the selected council date, and can filter the view by IC/OrgCode and Program Officer names. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | RCDC AI Validation Tool | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Research Performance Progress Report (RPPR) Report Comparison | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identification of duplication and overlap in grants year over year. | Research Performance Progress Reports (RPPR) are used by recipients to submit progress reports to NIH on their grant awards. AI can analyze such data to identify duplication and overlap in grants year over year. | Input: Grants data from QVR. Output: Identification of duplication and overlap in grants year over year. | Input: Grants data from QVR. Output: Identification of duplication and overlap in grants year over year. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Scientific summaries tool | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Enhancing scientific summary development for communicating scientific achievement. | Within NIAID DIR we have a team that drafts justifications for personnel actions based on the research being performed. This tool will be created to help them quickly and effectively prepare justifications for personnel actions for investigators in specific research fields. | Inputs: scientific publications, CV/Bib, BSC submissions and outcome memos, prior justifications, and clinical protocols. Outputs: Scientific summary | Inputs: scientific publications, CV/Bib, BSC submissions and outcome memos, prior justifications, and clinical protocols. Outputs: Scientific summary | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Similarity-based Application and Investigator Matching (SAIM) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | SRDMS NLP COI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Scientific review officers need to identify individuals who may pose a conflict-of-interest (COI) during the grant application review process. | SRDMS NLP COI is able to automate the identification of individuals who may pose a conflict-of-interest during the grant application review process, which saves significant time and effort by the SRO during application review and promotes consistency in identifying COIs. | The tool ingests grant application PDFs from an upstream source system, eRA, and these applications are processed to extract relevant named entities. The tool returns the extracted named entities and metadata in a table that is displayed via a custom module within the SRDMS application. | 19/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | The tool ingests grant application PDFs from an upstream source system, eRA, and these applications are processed to extract relevant named entities. The tool returns the extracted named entities and metadata in a table that is displayed via a custom module within the SRDMS application. | A labeled dataset of grant applications and associated conflicts of interest is used to calculate pipeline evaluation metrics. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Stem Cell Auto Coder | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Study Section Clustering Tool (SSCT) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Efficiently organizing grant applications into appropriate study sections based on scientific similarity. | The Study Section Clustering Tool (SSCT) enhances the agency's mission by automating and streamlining the organization of grant applications into scientifically relevant study sections, improving efficiency and reducing manual effort. It ensures applications are reviewed by experts with appropriate expertise, adapting over time to changes in scientific fields through periodic model updates. This data-driven support leads to higher-quality peer review processes, promoting more effective research funding decisions. | The AI system's outputs are lists of study sections grouped together based on the scientific similarity of grant application texts. Specifically, it generates clusters of applications that should be reviewed collectively because they share related scientific topics. These groupings serve as recommendations to subject matter experts, who use them to finalize the organization of study sections for peer review panels. | 23/01/2026 | b) Developed in-house | Yes | The AI system's outputs are lists of study sections grouped together based on the scientific similarity of grant application texts. Specifically, it generates clusters of applications that should be reviewed collectively because they share related scientific topics. These groupings serve as recommendations to subject matter experts, who use them to finalize the organization of study sections for peer review panels. | Text of grant applications submitted to the Center for Scientific Review (CSR). | not publicly disclosed as an open government data asset | No | It does not have a standalone publicly available Privacy Impact Assessment (PIA). However, it operates within the Center for Scientific Review General Support System (CSR GSS), which is a FISMA-reportable system that has an associated PIA covering the overall system environment where the AI tool functions. | k) None of the above | Yes | No | It does not have a standalone publicly available Privacy Impact Assessment (PIA). However, it operates within the Center for Scientific Review General Support System (CSR GSS), which is a FISMA-reportable system that has an associated PIA covering the overall system environment where the AI tool functions. | ||||||||||
| Department Of Health And Human Services | HHS/NIH | Synonymy prediction in the UMLS Metathesaurus | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | TB Case Browser Image Text Detection | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Without OCR technology, validation of text on images is manually intensive and, without proper controls, can create the risk of sensitive information coming into the system. | Provides additional protection against PII/PHI ingress into the TB Portals imaging dataset in a far more automated process. | User uploads a DICOM image, which is converted and passed to the AWS Rekognition service. Output is a JSON block with predictions on the location of text within an image. | 19/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, Guidehouse, Research Data and Communication Technologies Corp. | Yes | User uploads a DICOM image, which is converted and passed to the AWS Rekognition service. Output is a JSON block with predictions on the location of text within an image. | Existing TB Portals images with and without text are used to evaluate performance. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Tool for PO Lookup Assignment (TPAL) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This tool helps NIGMS program staff determine the most appropriate SME at NIGMS and/or the most appropriate IC for a research proposal. | There are many occasions in which Division Directors, Branch Chiefs, and Program Officers wish to receive suggestions for the most appropriate people to talk to about a project proposal or where to send a proposal that might not be appropriate for NIGMS. | Input: Free form text in an online textbox. Output: Top three most relevant ICs and POs and their probabilities. | 20/07/2026 | b) Developed in-house | No | Input: Free form text in an online textbox. Output: Top three most relevant ICs and POs and their probabilities. | All data come from the internal NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Transformative Research Award Anonymization Check (TRAAC) | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Machine learning pipeline for mining citations from full-text scientific articles | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Machine learning system to predict translational progress in biomedical research | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI Software for Conference/Workshop Summaries | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | CylanceProtect | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIH Grants Virtual Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Provides easy access to grants information and policies. | Chat Bot to assist users in finding grant related information. | The inputs include information available on the NIH Grants and Funding site, such as FAQs. The outputs are answers to questions/prompts provided by the user. | 20/05/2026 | a) Purchased from a vendor | Yes | The inputs include information available on the NIH Grants and Funding site, such as FAQs. The outputs are answers to questions/prompts provided by the user. | Website data and manual. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Open AI SharePoint Document Assistant | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Splunk IT System Monitoring Software | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Aivia | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Provides machine-learning-based object classification in microscopy applications | Provides AI-based segmentation, enhancement, and prediction in microscopy applications. This tool is utilized in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | 23/01/2026 | a) Purchased from a vendor | Leica | No | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | Training data is generated by the microscopy user. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Alphafold | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of atomic models from CryoEM specimens. | This tool uses ML to build de novo atomic models for proteins based on amino acid sequence alone. This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are amino acid sequences, outputs are atomic models. | 23/01/2026 | a) Purchased from a vendor | Google DeepMind | No | Inputs are amino acid sequences, outputs are atomic models. | Models were trained using publicly available data. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Automated approaches to analyzing scientific topics | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Slow and inefficient means by which decision makers are able to evaluate portfolios | Assist decision makers in analyzing topics in their portfolios | Recommendation | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | Recommendation | Biomedical publications and preprints from PubMed and select publicly available preprint servers, grants titles and abstracts from IMPACII, and patent data from USPTO | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | cryoDRGN | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Prediction of protein flexibility and variability | Software that is used to evaluate protein flexibility and variability in the dataset (https://github.com/zhonge/cryodrgn). This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Input is cryoEM imaging data, outputs are protein flexibility and variability. | 22/01/2026 | a) Purchased from a vendor | Open Source | No | Input is cryoEM imaging data, outputs are protein flexibility and variability. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/zhonge/cryodrgn | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | crYOLO | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Automated screening of CryoEM specimens | This tool is an open-source machine learning-based particle picker (https://cryolo.readthedocs.io/en/stable/). This tool automatically picks targets based on its general model or an adapted model using a small number of manually selected particles. It is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are low magnification cryoEM imaging data and, optionally, manually selected targets; outputs are automatically selected imaging targets. | 21/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are low magnification cryoEM imaging data and, optionally, manually selected targets; outputs are automatically selected imaging targets. | Models were initially trained using publicly available data; further training may be performed using the data set being analyzed. | No | k) None of the above | No | https://cryolo.readthedocs.io/en/stable/ | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | CryoSPARC | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of protein flexibility and variability | Proprietary software that is used for cryoEM data processing. Some steps in the workflow use ML to evaluate the protein flexibility and variability in the dataset (https://cryosparc.com/). This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Deep learning - stochastic gradient descent | 22/05/2026 | a) Purchased from a vendor | Structura Biotechnology | No | Deep learning - stochastic gradient descent | Training data owned by commercial software developer. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | DeepEMhancer | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Other | Automated screening of CryoEM specimens | Open-source software that is used to obtain final "sharpened" cryoEM maps (https://github.com/rsanchezgarc/deepEMhancer). This algorithm uses a ML model to estimate the noise in the model and refine the local areas of the map and is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Input is cryoEM imaging data, outputs are enhanced images. | 22/01/2026 | a) Purchased from a vendor | Open Source | No | Input is cryoEM imaging data, outputs are enhanced images. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/rsanchezgarc/deepEMhancer | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Dual Use Research of Concern (DURC) Categorization LLM | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | FIJI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Provides machine-learning-based object classification in microscopy applications | Provides AI-based segmentation, enhancement, and prediction in microscopy applications. This tool is utilized in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | 23/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are microscopy images, outputs are object and structure annotations and enhanced images. | Training data is generated by the microscopy user. | No | k) None of the above | No | https://github.com/juglab/labkit-ui | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Identification of emerging areas | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identification of the rate of progress across scientific fields to inform data-driven decision making | Help decision makers to target new investments to topics with the greatest potential to accelerate scientific progress | The rate of progress across scientific fields | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | The rate of progress across scientific fields | Citation data for publicly available biomedical publications | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Imaris | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Provides machine-learning-based object classification in microscopy applications | Provides machine-learning-based object classification in microscopy applications. This tool is utilized in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are microscopy images, outputs are object and structure annotations. | 23/01/2026 | a) Purchased from a vendor | Andor | No | Inputs are microscopy images, outputs are object and structure annotations. | Training data is generated by the microscopy user. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ModelAngelo | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Prediction of atomic models from CryoEM specimens. | This tool uses ML to build atomic models in cryoEM maps, with or without amino acid sequence input. This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are cryoEM imaging data and optionally amino acid sequence, outputs are atomic models. | 22/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are cryoEM imaging data and optionally amino acid sequence, outputs are atomic models. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/3dem/model-angelo | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIGMS Azure Open AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | There are a number of situations in which administrative activities can be augmented by generative AI, especially when classification of documents is needed but no training data exist. | We are using large language models (LLMs), in particular OpenAI's models, for business process improvement. We have used these models for visualization of grant portfolios as well as numerous classification problems: IC prediction, clinical trial prediction, research area prediction, etc. These models allow us to classify documents through simple prompt engineering rather than the laborious process of creating a custom training set from scratch. These models also allow us to reduce the number of applications that humans need to review from tens of thousands of applications to mere hundreds or fewer for a number of tasks. | Input: Text from various components of NIH grant applications. Output: OpenAI chat completions (text) or text embeddings (vectors of numbers). | 24/09/2026 | b) Developed in-house | No | Input: Text from various components of NIH grant applications. Output: OpenAI chat completions (text) or text embeddings (vectors of numbers). | All data used for this project are internal to NIH, mostly administrative data from the NIH IMPAC II database. | No | k) None of the above | Yes | It is not publicly available. | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Pangolin Lineage Classification of SARS-CoV-2 Genome Sequences | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Process a high volume of sequence data in real time to identify meaningful mutational patterns while minimizing the need for human effort. | Improved user retrieval of SARS-CoV-2 genome sequences based on classification and tracking of specific lineages, including those associated with mutations that may decrease effectiveness of therapeutics or protection provided by vaccination. | Lineage classification identifiers for sequences | 21/04/2026 | a) Purchased from a vendor | http://cov-lineages.org; https://pangolin.cog-uk.io/ | No | Lineage classification identifiers for sequences | Publicly available SARS-CoV-2 sequence data from the GenBank resource was used to develop the tool | No | k) None of the above | No | https://github.com/cov-lineages/pango-designation | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Prediction of protein 3D structures | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Prediction of transformative breakthroughs | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Slow and inefficient identification of topics likely to produce a scientific breakthrough | Predicting discoveries that are likely to be transformative breakthroughs in science can improve data-driven decision making | Prediction of discoveries that are likely to be transformative breakthroughs in science | 23/02/2026 | c) Developed with both contracting and in-house resources | Lexical Intelligence, LLC | No | Prediction of discoveries that are likely to be transformative breakthroughs in science | PubMed database | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Ptolemy | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Automated screening of CryoEM specimens | Algorithm used to find and classify areas in low magnification CryoEM images for imaging (https://github.com/SMLC-NYSBC/ptolemy). This tool is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are low magnification cryoEM imaging data, outputs are specimen classifications. | 23/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are low magnification cryoEM imaging data, outputs are specimen classifications. | Models were trained using publicly available data. | No | k) None of the above | No | https://github.com/SMLC-NYSBC/ptolemy | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Research Area Tracking Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Analysis staff needed assistance identifying research areas associated with individual projects. | Within NIAID we have a team that codes grants based on the research being proposed. They also prepare reports for high priority research areas. This tool was created to help them quickly identify projects that fall into a specific research field. | Grant title and abstract. Probability Estimates. | 20/01/2026 | c) Developed with both contracting and in-house resources | No | Grant title and abstract. Probability Estimates. | Grant title and abstract and coding. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Systematic investigation of the National Human Genome Research Institute History of Genomics and the Human Genome Project Archive | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | TB DEPOT (Tuberculosis Data Exploration Portal) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There was a lack of deidentified, multidimensional tuberculosis socioeconomic, clinical, imaging, and pathogen genomic data available for researchers to use in developing models and learning more about TB cases. | Provide a no-code application for users to explore and analyze multidimensional TB Portals data. | User selects "cohorts" of tuberculosis cases from TB Portals containing structured clinical/socioeconomic, pathogen genomic, and imaging data for analysis as inputs. The outputs include confusion matrices, cohort comparisons, and visualizations like feature importance in the model. Outputs are available within the application and via an API. | 19/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, Guidehouse, Research Data and Communication Technologies Corp. | Yes | User selects "cohorts" of tuberculosis cases from TB Portals containing structured clinical/socioeconomic, pathogen genomic, and imaging data for analysis as inputs. The outputs include confusion matrices, cohort comparisons, and visualizations like feature importance in the model. Outputs are available within the application and via an API. | TB Portals data is used to train, fine-tune, and evaluate performance of the model. | No | b) Sex | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | TB Portals Outlier Detection Lambda Function | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | The quality of chest X-ray (CXR) images uploaded by TB Portals Program partners varied significantly, and as the scale of images increased, NIAID needed a way to identify outliers in the imaging dataset. | Detect potential low-quality chest X-rays to flag as potentially being unsuitable for AI/ML training and flag for quality improvement. | Input: DICOM file. Output: classification of Outlier or not Outlier via the model. | 21/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, Guidehouse, Research Data and Communication Technologies Corp. | Yes | Input: DICOM file. Output: classification of Outlier or not Outlier via the model. | Existing TB Portals images are used to train, fine-tune, and evaluate performance of the model. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Topaz | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Automated screening of CryoEM specimens | This tool is an open-source machine learning-based particle picker (https://github.com/tbepler/topaz). This tool automatically picks targets given a small number of manually selected particles. It is utilized by the Cryo-EM core in support of research projects within NIEHS DIR and DTT laboratories. | Inputs are low magnification cryoEM imaging data and manually selected targets; outputs are automatically selected imaging targets. | 21/01/2026 | a) Purchased from a vendor | Open Source | No | Inputs are low magnification cryoEM imaging data and manually selected targets; outputs are automatically selected imaging targets. | Models were initially trained using publicly available data; further training is performed using the data set being analyzed. | No | k) None of the above | No | https://github.com/tbepler/topaz | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Protein Modeling with AlphaFold | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Generative AI | Protein modeling | Enormous leap forward in speed and accuracy in predicting protein folds and complexes. | The output is a series of protein structure files and related confidence/quality scores and metrics for each structure file. | 20/01/2026 | a) Purchased from a vendor | No | The output is a series of protein structure files and related confidence/quality scores and metrics for each structure file. | No | k) None of the above | No | https://deepmind.google/science/alphafold/ | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | eSlate Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool answers questions using genAI by pulling information directly from the eSlate nomination packages. | CSR leadership can make quick decisions about slate approval and make any further improvements to slate review process | This tool outputs answers to questions about the quality of study section nomination slates of standing study section members, and if there are potential issues, allows the user to more closely examine the content of the nomination slate directly. | 25/01/2026 | b) Developed in-house | Yes | This tool outputs answers to questions about the quality of study section nomination slates of standing study section members, and if there are potential issues, allows the user to more closely examine the content of the nomination slate directly. | CSR slates by year, employee structures | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | SRO Handbook Bot | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This tool answers Scientific Review Officer (SRO) questions by retrieving relevant information from the handbook, saving the user time looking for answers to their questions. | The system will help SROs perform policy and handbook searches based on policy numbers and related keywords, using semantic understanding to improve search accuracy. | Provides summarized search results. | Provides summarized search results. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detection/Identification of Reviewer Expertise and Grant Application Content | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is designed to detect and identify grant application content and reviewer expertise, such as pediatric or other domains, to support accurate matching and more efficient, informed decision-making. | The tool is expected to enhance the agency's mission by improving the accuracy, fairness, and efficiency of processes such as reviewer assignment and grant application analysis. By automating the detection of relevant content and expertise, it reduces manual workload, ensures better alignment between reviewers and applications, and supports more informed decision-making. This leads to more equitable and effective funding outcomes, ultimately benefiting the general public through improved support for research and programs that address critical needs. | The AI system's outputs are classifications or labels indicating whether a grant application involves specific content areas (e.g., pediatric or other domains) and assessments of reviewer expertise based on biosketches, publications, and related data. These outputs are used to support accurate matching between applications and qualified reviewers. | 25/07/2026 | b) Developed in-house | Yes | The AI system's outputs are classifications or labels indicating whether a grant application involves specific content areas (e.g., pediatric or other domains) and assessments of reviewer expertise based on biosketches, publications, and related data. These outputs are used to support accurate matching between applications and qualified reviewers. | This AI use case does not involve training or fine-tuning models. Instead, it uses predefined rules and prompts to analyze existing grant application texts and reviewer information to identify relevant content and expertise. Evaluation is based on validating the accuracy of these prompt-based classifications against known examples. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Detection of AI-Generated Reviewer Critiques | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool detects if a given written critique was generated by AI tools, such as large language models (LLMs), as the reviewers are not allowed to write their critiques using these AI assistants. Flagged critiques are examined by NIH staff and discussed with the reviewers. | NIH policy mandates that reviewers produce independent analyses of grant applications based on their expertise and knowledge. The use of genAI to produce a written critique of a grant application is in violation of NIH policy and fails to provide an independent assessment of the application. | This tool outputs classifications of whether the reviewer critique was likely produced by Generative AI or not. | 20/01/2026 | b) Developed in-house | Yes | This tool outputs classifications of whether the reviewer critique was likely produced by Generative AI or not. | critiques written by the reviewers, AI-generated critiques, Amazon book reviews, and AI-generated book reviews | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | MirBot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI tool addresses the challenge of efficiently and consistently processing incoming study PDFs by automatically extracting key information needed for grant review workflows. | The AI tool helps maintain continuity in the grant review process by ensuring consistent extraction and presentation of key study information, even as experienced staff retire or transition. This reduces the training burden on new team members and preserves institutional knowledge through standardized workflows. | The system produces an indexed and preprocessed version of each submitted PDF, enabling context-aware question-answering by the AI. A predefined set of questions is automatically asked to extract relevant information from the document, which is then used to populate the grant prior approval review form details for the associated study. | 24/10/2026 | c) Developed with both contracting and in-house resources | Technatomy Axle | No | The system produces an indexed and preprocessed version of each submitted PDF, enabling context-aware question-answering by the AI. A predefined set of questions is automatically asked to extract relevant information from the document, which is then used to populate the grant prior approval review form details for the associated study. | The AI was trained with existing study PDFs to help the AI properly identify the structure of the PDF files. No PHI from the studies was in these documents, just details about the study and the request. | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Notebooks Hub | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Helping scientists and researchers code their applications, dashboards, and data analyses more quickly, maintain higher code standards, and gain more insight from their data. | More time can be spent on subject-matter exploration and understanding than on the coding and analysis frameworks that support these goals. Current assessments indicate 40% speed increases in developing code and applications, with improvements increasing year-over-year. | Python, R, JavaScript, Java, etc. code | 25/08/2026 | c) Developed with both contracting and in-house resources | Axle Informatics, Microsoft, OpenAI | No | Python, R, JavaScript, Java, etc. code | NA - Commercial or open source models used. Not trained in house. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Ask Aithena | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Helps researchers stay on top of the latest research in their field and quickly get up to speed in new fields by providing conversational answers while also providing verifiable references for the user to read primary sources. | Increased productivity of researchers due to more time spent doing science and less time researching other people's science. | Text blurbs answering the user's questions and the references used to answer those questions. | 23/07/2026 | c) Developed with both contracting and in-house resources | Axle Informatics, Microsoft, OpenAI | No | Text blurbs answering the user's questions and the references used to answer those questions. | NA - Commercial or open source models used. Not trained in house. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Writing code using AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Only a limited number of users are able to develop code. | Assistance with novice users writing code to achieve data transformations. | Recommended PySpark code | 24/06/2026 | a) Purchased from a vendor | Palantir Technologies deployed application within Foundry | Yes | Recommended PySpark code | no agency data provided | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ChIRP - A ChatGPT Model for the NIH Intramural Community | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | We hoped to load text into the chatbot to assist in summarization and identification of prominent themes. ChIRP was moderately helpful for our use case, primarily because we used it at a stage in development where documents could not be directly uploaded into the chatbot | Chatbots with the ability to scan, summarize, and compare document text could likely assist researchers working with large qualitative datasets. | We hoped to use ChIRP to thematically analyze text. | 25/02/2026 | b) Developed in-house | Yes | We hoped to use ChIRP to thematically analyze text. | unknown | No | unknown | k) None of the above | Yes | Not available | unknown | ||||||||||||
| Department Of Health And Human Services | HHS/NIH | Software Approval Agent | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | All software requests currently go to the IT Service Desk, who then need to research the approval status, respond to the end user, and forward the request to other offices in order to process, respond, and route the request to the ISSO or Administrative staff. The agent will help route software requests directly to the correct parties based on the approval status of the requested software. | Helps staff follow standard policies & procedures thereby improving operational efficiencies. | AI system provides user with the status of requested software (Approved for General Use, Approved but Requires Purchase, Approved for Special Use Only, Not Approved, Not Found in Catalog) and then generates an IT service request routed to the correct recipient(s) with the information necessary to process the request. | AI system provides user with the status of requested software (Approved for General Use, Approved but Requires Purchase, Approved for Special Use Only, Not Approved, Not Found in Catalog) and then generates an IT service request routed to the correct recipient(s) with the information necessary to process the request. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | LibreChat | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The NHGRI-internal generative chat and image capabilities of LibreChat will provide an alternative to using publicly available chat services that may expose NIH data. Prompt data is stored within NHGRI's system boundaries and will not be used to train public models. | It will allow NHGRI staff to utilize chat and image generative AI services using available LLMs from various CSPs through a single interface without exposing NHGRI data to train public models. | AI system outputs are in response to user prompts. The LLMs utilized have the ability to recommend (based on prompted preferences), formulate content, and inform decisions based on the details of the prompts. | AI system outputs are in response to user prompts. The LLMs utilized have the ability to recommend (based on prompted preferences), formulate content, and inform decisions based on the details of the prompts. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Ethics AI Agent | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Users repeatedly ask Ethics staff the same type of question, which causes challenges for Ethics staff to manage their tasks effectively. | Helps staff follow standard policies & procedures thereby improving operational efficiencies. | The AI system outputs recommendations based on the Ethics knowledgebase, with links to source references. In addition, if staff choose to consult with Ethics staff, it sends a request to the Ethics team with the conversation history included. | The AI system outputs recommendations based on the Ethics knowledgebase, with links to source references. In addition, if staff choose to consult with Ethics staff, it sends a request to the Ethics team with the conversation history included. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Help Desk AI Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Help Desk staff need to write new responses to IT service desk requests and inquiries. The agent will provide Help Desk staff with ideal language when responding to tickets. | Help IT staff communicate with end user in a clear, consistent message to improve customer service quality and efficiencies. | The AI agent provides responses to IT service desk tickets that staff can copy & paste into ServiceNow fields. | The AI agent provides responses to IT service desk tickets that staff can copy & paste into ServiceNow fields. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Data Management and Sharing Plans - Assistant Review Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This tool uses NLP to identify the areas in the grant application that deal with the Data Management and Sharing Plan (DMSP). It is designed to assist the reviewer in determining whether the application meets the NIH Data Management and Sharing requirement. | This tool improves productivity and efficiency by streamlining the process and pre-processing the DMS plan against a checklist. It assists the reviewer in their task. The reviewer makes the final determination based on the results and the text of the application. | The tool provides answers for the DMS plan PO checklist. | 24/10/2026 | b) Developed in-house | No | The tool provides answers for the DMS plan PO checklist. | Leverages NIH application data from IRDB. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | ITAC SOP Chat | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI provides natural language searching of ITAC SOP, policy, and as-built documents, allowing users to easily locate knowledge and easily view the document section that includes the knowledge. | The AI will increase operational efficiency by allowing ITAC users to more easily search and locate knowledge from SOP, policy, and as-built documents. | The AI outputs natural language responses based on knowledge from ITAC SOP, policy, and as-built documents. It also outputs citations with a built-in reader to allow users to view document sections where the knowledge was found. | 25/07/2026 | b) Developed in-house | No | The AI outputs natural language responses based on knowledge from ITAC SOP, policy, and as-built documents. It also outputs citations with a built-in reader to allow users to view document sections where the knowledge was found. | The AI uses a retrieval augmented generation architecture to retrieve relevant document sections and ground the generative responses. Documents are indexed with cognitive search and stored in blob storage. | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | BDC Website Search | a) Pre-deployment The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI provides natural language searching of BDC website information, making it easier for users to find relevant information. | The AI will provide a more user friendly and efficient information search system. | The AI outputs natural language search responses using BDC website information. The AI also outputs citations for retrieved information. | The AI outputs natural language search responses using BDC website information. The AI also outputs citations for retrieved information. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | BioData Catalyst Harmonized Data Model | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | Harmonizing complex scientific concepts in retrospectively collected data. | The AI will help lower the barrier to use and improve the quality of biomedical data for research to improve public health outcomes. | Harmonized, AI-ready data. | Harmonized, AI-ready data. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NHLBI Chat Workflow - Data Management and Sharing Plan | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Assists program officers in reviewing grant applications for a specific query in the checklist (DMSP). | Improved efficiency in workflow for Program officers | Recommendations | 25/06/2026 | b) Developed in-house | Yes | Recommendations | N/A. We use commercial models available via Microsoft Azure, through an NIH STRIDES environment. | No | k) None of the above | Yes | https://github.com/NHLBI/LLM_Chat_Interface | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NHLBI Chat Workflow - Foreign Component | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Assists program officers in reviewing grant applications for a specific query in the checklist (Foreign Component). | Improved efficiency in workflow for Program officers | Recommendation | 25/06/2026 | b) Developed in-house | Yes | Recommendation | N/A. We use commercial models available via Microsoft Azure, through an NIH STRIDES environment. | No | k) None of the above | Yes | https://github.com/NHLBI/LLM_Chat_Interface | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Merops | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Automate and expedite copyediting of scientific manuscripts | More efficient and cost-effective source of copyediting manuscripts. | Recommendation | 25/07/2026 | a) Purchased from a vendor | Shabash | No | Recommendation | The software is proprietary and cannot be trained; it is, however, highly customizable by the end user (e.g., NIAAA staff) to accommodate the journal's specific style preferences. | Yes | k) None of the above | No | https://shabash.net/merops/ | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | OSPIDA RPAB Scientific Coding Assistance Tool (CAT) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | OSPIDA's Referral Program and Analysis Branch (RPAB) is responsible for scientific coding assignments of grant applications. They assign scientific codes to grants based on different objectives, selecting from a list of over 3,000 scientific codes related to NIAID's research areas. | RPAB's Scientific CAT will significantly save time spent on the initial manual assignment of scientific codes and promote consistency in coding across applications. | The models output scientific code predictions by objective, which are displayed in the SCORS application used by RPAB to review grants. | 25/01/2026 | c) Developed with both contracting and in-house resources | Deloitte | Yes | The models output scientific code predictions by objective, which are displayed in the SCORS application used by RPAB to review grants. | Multiple years of grant data and RPAB scientific coding assignment data were used to train the NLP models; a different set of grant data and RPAB scientific coding assignment data was used to evaluate model metrics from the trained NLP models. | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AWS Exscribo | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Typical transcription services struggle to handle scientific and biomedical terminology, and many platforms do not include the ability to pretrain for specific words, or offer the ability to perform queries on that text in the transcription system itself. | The AWS Exscribo application is specifically built to allow for custom vocabularies, editing of meeting transcriptions, and model retraining, making it well-suited to handle complex medical terminologies. It also has the benefit of using AWS Bedrock, which enables Gen AI prompting on the transcription. | The AI system's output includes transcriptions of audio recordings and the results of Gen AI prompts run on those transcriptions. | 25/01/2026 | c) Developed with both contracting and in-house resources | Deloitte, AWS | No | The AI system's output includes transcriptions of audio recordings and the results of Gen AI prompts run on those transcriptions. | Custom vocabularies of words or acronyms that are important to conversations, and any edits that are made to the output of the transcription that are used for training. | No | k) None of the above | Yes | https://github.com/aws-samples/sample-scientific-meeting-transcription | |||||||||||||
| Department Of Health And Human Services | HHS/NIH | Virtual Assistant for the NIAMS Grant Management Applications | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | New investigators and staff often struggle to locate policies, workflows, and troubleshooting steps buried across SharePoint guides, SOP documents, and in-app help screens for the two NIAMS Grants Management applications. In addition, support tickets and emails consume significant SME staff time. | Streamlined Application Support Operations: The AI-powered virtual assistant can handle user questions and inquiries, reducing the burden and support time spent by grant SMEs and the IT application development team. It provides self-service options for users to resolve issues independently. | Input: NIAMS Grants Management training materials and job aids such as SOPs and user guides. Output: Responses to end-user questions with citations from user guides, training materials, job aids, and SOPs. | Input: NIAMS Grants Management training materials and job aids such as SOPs and user guides. Output: Responses to end-user questions with citations from user guides, training materials, job aids, and SOPs. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Grant award process efficiency | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Reducing time burden of repetitive tasks related to the grants award process | Improvement of operational efficiency for making grant awards | Highlight and summary of information found in grant-related documents | 25/01/2026 | c) Developed with both contracting and in-house resources | No | Highlight and summary of information found in grant-related documents | Grant-related documents | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Coding translation | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Translates code between programming languages, e.g. rewrite python script in equivalent R commands; translate natural language to code, e.g. write a python script that will convert this gene matrix to a transcript matrix etc. | Saves time in troubleshooting programming code or rewriting code in a different language | Output is computer code in designated programming language. | Output is computer code in designated programming language. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Assist in research on the association between hearing loss and dementia for the NIDCD EARssentials hot topic presentation | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Assist in deepening understanding of the statistical methodologies applied in research examining the association between hearing loss and dementia. | Improving understanding of statistical methods in population research of hearing loss and dementia. | Explanation of the concepts and methodologies used in population-based studies. | Explanation of the concepts and methodologies used in population-based studies. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI-generated podcast describing recent advances in methods for analysis of RNA-seq data | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Genomic analysis is a rapidly evolving field with advances in analytical methods arising frequently. This makes it difficult to stay on top of new techniques. | Improved understanding of new genomic analytical methods | An AI-generated podcast episode synthesizing information from several recent publications highlighting new methods for genomic analysis | An AI-generated podcast episode synthesizing information from several recent publications highlighting new methods for genomic analysis | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI-Enhanced Journal Clubs | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | A lack of engagement, efficiency, and productivity at journal clubs due to the burden of gathering topically relevant and high quality journal articles for review. | Make journal clubs more engaging and informative with highly relevant articles recommended by the AI. | Recommended journal articles | Recommended journal articles | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Grants Portfolio Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Facilitate scientific program management and grant portfolio assessment | Better monitoring of grants for alignment with current policies and regulations. | Summaries of sections of grant applications and progress reports. Categorization of grants and applications. | Summaries of sections of grant applications and progress reports. Categorization of grants and applications. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Identifying pain vs non-pain research | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identifying pain research studies is complicated because of the ubiquity of pain in language. NIH program staff working in the pain research field need to be able to distinguish pain research from opioid research, both of which get classified as "Pain Research" by RCDC. This usually requires a significant investment of staff time; however, since a large enough training dataset has been developed, we are training various ML algorithms to identify grants that are truly researching pain compared to grants that merely mention pain in their text. | It is expected that this will decrease the amount of staff time needed to curate a portfolio as a starting point for analyses and that it will speed up the time it takes to carry out an analysis. | Expected output is a file with a list of grants that were identified as pain related. | Expected output is a file with a list of grants that were identified as pain related. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | HEAL Portfolio Topic Analysis | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The HEAL Initiative supports hundreds of pain research studies. It is meant to support pain research that cannot be carried out by an individual IC. One challenge we have is describing what HEAL research is. This project is an attempt to give program staff a starting point and visualization that describes the topics covered by HEAL research so that staff can start to understand the HEAL portfolio and how the different programs in HEAL are related or different. It will also allow staff to be able to explain how the HEAL portfolio is different from portfolios of different ICs. It may be possible for staff to carry out this analysis, but using an ML algorithm would require a fraction of the staff time needed for this and be more consistent. While it may be possible for staff to classify the topics in the relatively small HEAL portfolio without AI, having an ML method in place will allow a single staff member to compare the HEAL portfolio to the much larger NIH portfolio using consistent methods. | This will allow us to describe various portfolios that consist of 100s or 1000s of grants without requiring the time of dozens of staff members. The output will allow staff members to communicate summaries of their portfolios consistently and clearly. | Network diagram showing different topics of research in the HEAL portfolio, how "related" these topics are to each other, and how many grants fall into each topic. | Network diagram showing different topics of research in the HEAL portfolio, how "related" these topics are to each other, and how many grants fall into each topic. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Determining how federal pain research has responded to the Federal Pain Research Strategy. | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The IPRCC has asked NIH to evaluate if all FPRS priorities are being addressed by NIH research. The task would require staff to categorize over 6000 grants into up to 13 FPRS priority areas. A single staff member can curate 5 grants per hour using these parameters, and agreement among staff members is approximately 30%. Therefore, we are carrying out this project using ML to be able to complete the project in a timely manner and without the need of significant staff time investment. It will identify areas of research that require staff to investigate further, instead of having staff curate the whole portfolio. | Allow staff to complete the analysis requested by the IPRCC without the need for grant-by-grant curation of the entire Federal Pain Research Portfolio | Expected output is a CSV file that lists all federally funded pain research grants and assigns them probabilities of addressing each of the 13 FPRS priorities. | Expected output is a CSV file that lists all federally funded pain research grants and assigns them probabilities of addressing each of the 13 FPRS priorities. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Transformer-Based Metadata Alignment Workflow | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Inconsistent data elements and differing definitions in glossaries of metadata structures across research data ecosystems hinder interoperability and FAIR-aligned reuse. | Speed improvements in metadata harmonization across ecosystems, enabling more discoverable, reusable, and interoperable datasets to support secondary research, cross-program analysis, and interdisciplinary biomedical discovery. Enhances readiness for large-scale AI/ML applications by providing scalable semantic alignment capabilities and strengthening metadata infrastructure. | Two parallel outputs: (1) Ranked variable pairs using semantic similarity scores generated by transformer-based embeddings (MiniLM, MPNet); (2) GPT-based similarity scores with accompanying natural language justifications derived from semantic evaluation of metadata descriptions. | Two parallel outputs: (1) Ranked variable pairs using semantic similarity scores generated by transformer-based embeddings (MiniLM, MPNet); (2) GPT-based similarity scores with accompanying natural language justifications derived from semantic evaluation of metadata descriptions. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | LLM-Assisted Referral Justification Email Generator | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Referral justifications are often time-consuming to draft manually and require consistent interpretation of referral guidelines and application content. | Reduces Program Officer workload by generating structured draft justifications grounded in referral guidelines. Promotes consistency, transparency, and standardization in referral workflows while improving efficiency and decision traceability. Supports broader adoption of AI in operational decision support with human-in-the-loop oversight. | Structured draft referral justifications that cite relevant referral guidelines and assess alignment between guideline content and the application's title, abstract, and specific aims. | Structured draft referral justifications that cite relevant referral guidelines and assess alignment between guideline content and the application's title, abstract, and specific aims. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI Assist | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This solution is designed to address the challenges of manually processing and analyzing large volumes of unstructured data, including documents, reports, and policies, which is often time-consuming, labor-intensive, and prone to human error. | The implementation of an AI-powered analytics solution will streamline data processing, reduce human error, and enable faster, more accurate decision-making at NINDS. This will enhance operational efficiency, strengthen compliance, and allow staff to focus on higher-value tasks. | Text summary, document comparison, keyword extraction, assistance with writing, filling out forms, creating presentations, spreadsheets | Text summary, document comparison, keyword extraction, assistance with writing, filling out forms, creating presentations, spreadsheets | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Portfolio analysis and grant summarizing program | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Extracts specific information from hundreds of grant applications in a consistent, reproducible way | This AI program automates the process of analyzing text from a few to many hundreds of grants using a custom prompt and Azure OpenAI models | It dynamically identifies columns containing text for analysis, intelligently groups related data, and sends it to the AI for processing, one grant at a time, and ensures reproducible output. The results are then saved back to a new Excel file, providing a clear and auditable trail of the analysis. This tool allows for AI-assisted portfolio analysis. | It dynamically identifies columns containing text for analysis, intelligently groups related data, and sends it to the AI for processing, one grant at a time, and ensures reproducible output. The results are then saved back to a new Excel file, providing a clear and auditable trail of the analysis. This tool allows for AI-assisted portfolio analysis. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | RAG System For Travel Planning | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The chatbot answers user questions based on the user guide and may also categorize IT helpdesk tickets. | Provides instant answers to common travel questions and reduces hours spent manually searching through the user guide | Recommendations to users on filling out travel documentation. | Recommendations to users on filling out travel documentation. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Generating Metadata for Web Archive Resources | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Time-intensive process needed to generate accurate and descriptive metadata about web resources | More consistent, reliable, accurate and publicly available information about resources | Draft metadata about archived web resources. | Draft metadata about archived web resources. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Incorporating External Information into Taxonomy | a) Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Time-consuming and inefficient process of reading journal articles and manually copying new organism names from those articles into internal databases | Reduced time spent manually reading journal articles to find novel organism names and manually entering those names into the internal database | Spreadsheet of new organism names to be reviewed by taxonomy curators | Spreadsheet of new organism names to be reviewed by taxonomy curators | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Using Llama to summarize PubMed Central (PMC) full text articles that contain information on protein function | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Inefficient and manual process to assign accurate function to proteins and protein models | This will increase the efficiency of curators' work in providing up-to-date functional annotation for prokaryotic protein family models for use in the annotation pipeline and in adding this to RefSeq proteins. | Article summaries | 25/04/2026 | b) Developed in-house | No | Article summaries | PMC data | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Leveraging NLP and LLMs to identify and characterize NIH prevention research via 160-topic taxonomy | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Health & Medical | Pilot | c) Not high-impact | Not high-impact | Generative AI | Semi-automate manual curation | More timely assessments of NIH spending in specific areas within prevention research. | Classification of grants by health conditions, risk factors, study designs, and prevention research type. | 25/04/2026 | c) Developed with both contracting and in-house resources | Microsoft Azure, Westat | Yes | Classification of grants by health conditions, risk factors, study designs, and prevention research type. | Publicly available grant information (ApplID, Grant Number, Title, Abstract, Public Health Relevance). | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Driving Efficiency and Expansion of Dietary Supplement Label Database Data through AI | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Enhance database QA and increase the database record acquisition rate | More rapid sourcing of dietary supplement label data and enhanced quality assurance of database records | Increased number of monthly sourced labels from the current rate of 1500/month and increased accuracy of database record data fields | Increased number of monthly sourced labels from the current rate of 1500/month and increased accuracy of database record data fields | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | AI-enabled User Support and Impact Monitoring | a) Pre-deployment The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | User support for DSLD data inquiries and monitoring of DSLD usage and impact in the research community. | Support AI adoption in the federal workforce, offer new ways to address the complex analytical needs of DSLD's super-user community by allowing for a deeper exploration of the data than the standard DSLD web interface can provide, and provide the opportunity to evaluate the AI tool and to make incremental improvements by fine-tuning the model and interface. | Chatbot-style interface | Chatbot-style interface | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Section 508 Azure OpenAI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | This tool will streamline access to compliance resources, guidance, and support, empowering NIH staff and stakeholders to meet federal accessibility requirements. Designed to efficiently handle help desk inquiries, the chatbot will adhere to NIH's security protocols and provide accurate, role-specific information. | This tool will streamline access to compliance resources, guidance, and support, empowering NIH staff and stakeholders to meet federal accessibility requirements. Designed to efficiently handle help desk inquiries, the chatbot will adhere to NIH's security protocols and provide accurate, role-specific information. | Recommendations to Section 508 resources and guidance | 25/06/2026 | c) Developed with both contracting and in-house resources | Summome | No | Recommendations to Section 508 resources and guidance | Currently using Azure OpenAI's training model and MSFT CoPilot, but planned development to use knowledge repository through RAG | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | Enhancing the RCDC with Generative AI | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Generative AI | The goal is to assess whether generative AI can enhance the RCDC process by minimizing time-consuming, resource-intensive routine and manual tasks and improving overall efficiency. | Gen AI is expected to save time, reduce manual workload, and improve productivity by automating resource-intensive processes. These efficiencies support the agency's mission by enabling better resource allocation and enhanced categorization of NIH research. | The Azure cloud platform provides secure access to OpenAI's ChatGPT model, which is connected to search indexes pre-loaded with relevant datasets. These datasets include internal agency-provided data, such as meeting transcripts, and publicly available data from NIH RePORTER. ChatGPT responds to prompts to execute various tasks such as summarizing meeting transcripts and notes, and the scientific content within curated sets of grant applications. ChatGPT is also utilized to recommend an appropriate RCDC category for a grant application and provide explanations for its recommendations. Additionally, ChatGPT is prompted to predict semantic types for thesaurus concepts, identify hierarchical relationships between concepts, cluster similar concepts, and suggest synonyms for specified terms. | 24/05/2026 | a) Purchased from a vendor | Microsoft | No | The Azure cloud platform provides secure access to OpenAI's ChatGPT model, which is connected to search indexes pre-loaded with relevant datasets. These datasets include internal agency-provided data, such as meeting transcripts, and publicly available data from NIH RePORTER. ChatGPT responds to prompts to execute various tasks such as summarizing meeting transcripts and notes, and the scientific content within curated sets of grant applications. ChatGPT is also utilized to recommend an appropriate RCDC category for a grant application and provide explanations for its recommendations. Additionally, ChatGPT is prompted to predict semantic types for thesaurus concepts, identify hierarchical relationships between concepts, cluster similar concepts, and suggest synonyms for specified terms. | Text data from meeting transcripts and notes, and publicly available grant data from NIH RePORTER database. The data is indexed using Azure Cognitive Search AI and securely stored in an Azure cloud storage container. A Retrieval-Augmented Generation (RAG) chatbot is employed to retrieve the indexed data, and prompts are developed to effectively query the data and generate responses using ChatGPT. LLMs respond in a manner that can cite specific language in the data sources, allowing subject matter experts to validate LLM-generated outputs. | No | k) None of the above | No | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | NIH Travel Policy AI Chatbot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | The AI-powered chatbot is designed to address a key operational challenge: the high volume of routine travel policy inquiries directed toward NIH administrative staff. These questions, often repetitive and time-consuming, divert valuable human resources from more complex and strategic responsibilities. Specifically, the AI chatbot leverages Generative Artificial Intelligence (AI) to provide accurate, consistent, and real-time responses to questions related to the Federal Travel Regulation (FTR) and the NIH Travel Policy Handbook. By doing so, it significantly reduces the dependency on staff to manually research and craft responses to standard inquiries. In addition, the chatbot serves as the foundation for a self-service portal for the NIH community. This portal empowers employees and stakeholders to independently access authoritative travel policy guidance 24/7, improving efficiency, enhancing user experience, and ensuring policy compliance across the organization. | The Office of Financial Management (OFM) envisions this AI-powered chatbot as a critical support tool for NIH staff during the upcoming transition to a self-service travel planning model, aligned with the government-wide shift to GSA's ETS Next travel system, recently named GO.gov. With this transition slated to begin in 2026, the chatbot will serve as an intelligent, always-available assistant that simplifies the travel planning process for thousands of NIH employees. Enhanced Operational Efficiency. Improved User Experience for NIH Staff. Support for a Modern, Self-Service Government. Increased Compliance and Accuracy. Indirect cost benefit to public. Up to 60-80% reduction in inquiry volume handled by human agents. 160 FTE hours saved per month at the central NIH Travel office. 1000 FTE hours saved per month across the IC Community Travel offices. Drastic reductions in average response times, often from days or hours down to seconds. | The chatbot outputs text-based, interactive responses tailored to helping NIH staff plan and manage official travel in compliance with policy. These outputs are designed to be helpful, policy-compliant, user-specific, and non-decisional, serving as a productivity aid rather than an authority for travel approval. | 25/08/2026 | c) Developed with both contracting and in-house resources | Infer Solutions | No | The chatbot outputs text-based, interactive responses tailored to helping NIH staff plan and manage official travel in compliance with policy. These outputs are designed to be helpful, policy-compliant, user-specific, and non-decisional, serving as a productivity aid rather than an authority for travel approval. | Federal Travel Regulation and NIH Travel Policy Handbook documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/NIH | PowerAutomate Delinquent Submissions | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Other | Senior Auditors are required to spend significant time, approximately 25% of an FTE per auditor annually, on manually monitoring award recipients' submission status, contacting recipients, providing oversight, and documenting compliance at multiple intervals leading up to the fiscal year end. | By automating these processes, the AI will enable auditors to redirect roughly 25% of their time from low-value administrative tasks to high-skill activities that generate measurable impact, increasing efficiency, strengthening oversight, and sustaining our division's annual ROI of approximately 600%. This will ultimately return more funds to NIH from award recipients, directly supporting the agency's mission and benefiting the public through more effective use of federal resources. | Each award recipient that has an audit requirement was built into a SharePoint list including company name, CAGE Code, FYE, CYE, auditor assigned, and oversight (Grants Management, contracting officer, etc). Multiple layers of stacked Power Automate check today's date versus FYE and CYE (for Final incurred cost submissions and Provisionals, respectively) and send email notification to the vendor 6 months before a submission is due. The automation copies the auditor and other government oversight, enters the date the communication was delivered. A second level program runs daily checking until we are 3 months away from the due date and then a similar process with a different email and instructions are delivered to the award recipient. Finally, another layer of Power Automate calculates when a submission is delinquent and notifies the company of the implications of late submission with auditor and Government Oversight on copy. | 25/07/2026 | b) Developed in-house | No | Each award recipient that has an audit requirement was built into a SharePoint list including company name, CAGE Code, FYE, CYE, auditor assigned, and oversight (Grants Management, contracting officer, etc). Multiple layers of stacked Power Automate check today's date versus FYE and CYE (for Final incurred cost submissions and Provisionals, respectively) and send email notification to the vendor 6 months before a submission is due. The automation copies the auditor and other government oversight, enters the date the communication was delivered. A second level program runs daily checking until we are 3 months away from the due date and then a similar process with a different email and instructions are delivered to the award recipient. Finally, another layer of Power Automate calculates when a submission is delinquent and notifies the company of the implications of late submission with auditor and Government Oversight on copy. | Data regarding NIH DFAS audited entities over prior year assignments. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/OASH/OIDP | ABE (AI-driven. Beneficial. Efficient.) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | By centralizing federal HIV program information in a custom Knowledge Base, ABE addresses the challenges of fragmented data and inefficient workflows, supporting operational efficiencies through a reduction in labor force and contracts, and providing the ability to quickly access, summarize, analyze, and create resources. | ABE is expected to deliver key benefits by streamlining operations, reducing costs, and improving data-driven decision-making related to HIV/AIDS programs. ABE will serve as a mechanism to expose other federal partners to the usage of AI within a protected knowledge base, increasing work output efficiencies and fostering federal partnerships through expanded Knowledge Base assets and usage. | generative AI content, images, summarizations, and analysis based on federally approved assets. Knowledge Base is in compliance with Gender Ideology and Preventing Woke AI Executive Orders | 24/01/2026 | c) Developed with both contracting and in-house resources | ICF, DataSurge | No | generative AI content, images, summarizations, and analysis based on federally approved assets. Knowledge Base is in compliance with Gender Ideology and Preventing Woke AI Executive Orders | No | k) None of the above | Yes | |||||||||||||||
| Department Of Health And Human Services | HHS/ACF/OA | Sub Can Line Finder | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | The LLM helps compile an aggregated report of costs by intaking a specific CAN, category, sub CAN line item description, and a projection cost from the user (either by a CSV or manual entry) and returning line items from the open-closed report and requisition purchase order report related to the sub CAN line item description such as supplier name, document number, document type, and total obligations for that sub CAN line item. | The Sub CAN Line finder LLM helps find related obligations and line items from various reports and returns information crucial to a budget officer when creating a spend plan for their office. The LLM helps ACF Discover track various line items and pulls the information into one place: the Spend Plan module where ACF can track yearly budgets and monitor budget health. | The LLM helps compile an aggregated report of costs by intaking a specific CAN, category, sub CAN line item description, and a projection cost from the user (either by a CSV or manual entry) and returning line items from the open-closed report and requisition purchase order report related to the sub CAN line item description such as supplier name, document number, document type, and total obligations for that sub CAN line item. | 24/10/2026 | a) Purchased from a vendor | Palantir Technologies | Yes | The LLM helps compile an aggregated report of costs by intaking a specific CAN, category, sub CAN line item description, and a projection cost from the user (either by a CSV or manual entry) and returning line items from the open-closed report and requisition purchase order report related to the sub CAN line item description such as supplier name, document number, document type, and total obligations for that sub CAN line item. | N/A- using an integration of OpenAI as our model | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/OCIO | https://healthdata.gov | AI Harvest Service | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | The Harvest Service addresses inconsistent or incomplete metadata across distributed HHS open data sources by normalizing descriptions, creating concise summaries, and tagging data assets for improved searchability | Enhanced discoverability of HHS data from distributed agency-owned systems; improved user accessibility of health- and human-related data; faster identification of relevant data assets for researchers, policymakers, and the public; and increased value of open data investments. | AI-generated concise data asset descriptions and standardized metadata tags integrated into HHS Data Hub asset records; improved open data records surfaced on HealthData.gov for public consumption. | 25/07/2026 | a) Purchased from a vendor | Tyler Technologies | Yes | AI-generated concise data asset descriptions and standardized metadata tags integrated into HHS Data Hub asset records; improved open data records surfaced on HealthData.gov for public consumption. | Open metadata and data asset descriptions from multiple HHS OpDiv/StaffDiv open data portals. No PII is used. | https://healthdata.gov | No | k) None of the above | No | ||||||||||||
| Department Of Health And Human Services | HHS/OCIO | Federal Assistant AI Agent | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Agentic AI | HHS and its Family of Agencies have a diverse, distributed body of content that can be confusing to navigate and requires a user to have knowledge of HHS org structure to be able to obtain answers to their questions; leveraging Agentic AI using Turnkey app can reduce burden on citizens to find answers to their questions about HHS' services, and can reduce the volume of calls to HHS agency contact center representatives to answer common questions | Allows end-users to leverage a single point of entry and plain language to find information across all HHS and its family of agency websites, allowing only authoritative government sources, to provide answers to questions in a conversational manner without requiring the user to be able to navigate and understand a complex web of content and bureaucracy. Reduction in costs for contact centers via lower call volume and decreased time spent per call. | Plain language, conversational responses to customer inquiries via chatbot, providing sources for information as needed to increase accuracy of information and direct end-users to relevant resources. | Plain language, conversational responses to customer inquiries via chatbot, providing sources for information as needed to increase accuracy of information and direct end-users to relevant resources. | ||||||||||||||||||||||
| Department Of Health And Human Services | HHS/OS/ASFR | Similar Opportunities (KNN) | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Agencies do not receive the highest number of proposals from capable applicants as applications generally tend to come from the same set of applicants. | Increase the quality and capability of applicants submitting grant proposals to federal agencies. | Match of agency grant requirements to competent applicants | 24/10/2026 | a) Purchased from a vendor | MicroHealth | Yes | Match of agency grant requirements to competent applicants | Grants.gov public website | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/OS/ASFR | Applicant Help Chatbot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Answer user questions about Grants.gov | Faster answers to questions at a lower cost | System help documentation and funding opportunity listings | 19/05/2026 | a) Purchased from a vendor | Business Performance Systems | Yes | System help documentation and funding opportunity listings | Grants.gov public website | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Text Analyzer Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Program staff authoring NOFOs need capabilities to simplify language and ensure NOFOs remain compliant with the Plain Writing Act of 2010. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Displays Flesch-Kincaid grade level and an overall readability score of NOFO text. | 23/07/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Displays Flesch-Kincaid grade level and an overall readability score of NOFO text. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions AI Writing Assistant | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Generative AI | Program staff authoring NOFOs need capabilities to simplify language and ensure NOFOs remain compliant with the Plain Writing Act of 2010. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Generates a simplified rewrite of the selected text and presents a side-by-side comparison with the original. | 25/05/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Generates a simplified rewrite of the selected text and presents a side-by-side comparison with the original. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Recipient Risk Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Grant managers need an efficient method to conduct risk assessments before issuing financial assistance awards to prevent fraud, waste, and abuse. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Generates a risk score for each prospective recipient and lists the top contributing factors (e.g. prior findings) and data sources. | 19/03/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Generates a risk score for each prospective recipient and lists the top contributing factors (e.g. prior findings) and data sources. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Non-Competing Continuation Approval Tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Grant managers need an efficient workflow to identify and analyze differences in non-competing continuation budgets and narratives from year to year. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Produces Non-Competing Continuation eligibility recommendations for grants staff to act on. | 21/12/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Produces Non-Competing Continuation eligibility recommendations for grants staff to act on. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Helpdesk Agent | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Generative AI | GrantSolutions users need an efficient method to get answers to their login and access-related questions. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Provides information and helpful guidance to resolve common account related questions. | 25/07/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Provides information and helpful guidance to resolve common account related questions. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/ASFR/OG | GrantSolutions Non-Competing Continuation Review Tool | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Generative AI | Grant managers need an efficient workflow to identify and analyze differences in non-competing continuation budgets and narratives from year to year and ensure narratives promote federal priorities and mission. | Automates repetitive workflows, reduces manual effort, delivers actionable insights, and accelerates accurate, transparent decisions across the grant lifecycle. | Flags potential compliance risks and delivers a structured summary that feeds reporting dashboards and enables secure sharing with other authorized systems. | 25/07/2026 | c) Developed with both contracting and in-house resources | Internal Federal Shared Service (GrantSolutions) | Yes | Flags potential compliance risks and delivers a structured summary that feeds reporting dashboards and enables secure sharing with other authorized systems. | Historical grant documents | No | k) None of the above | Yes | ||||||||||||||
| Department Of Health And Human Services | HHS/OCR | ChatGPT | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Generative AI | Staffing shortage | More efficient investigations. | ChatGPT is used to break down complex legal concepts in plain language and identify patterns in court rulings impacting Medicaid services. | 25/07/2026 | a) Purchased from a vendor | OpenAI | No | ChatGPT is used to break down complex legal concepts in plain language and identify patterns in court rulings impacting Medicaid services. | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/OCR | CoPilot Outlook | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Generative AI | Staffing shortage | Faster correspondence with public. | Previously used to revise emails | 25/08/2026 | a) Purchased from a vendor | Westlaw | No | Previously used to revise emails | No | k) None of the above | No | |||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | DECIDE | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | Document drafting and editing | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | Document summarization | d) Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of Health And Human Services | HHS/SAMHSA | Not required to disclose | AWS Kendra Search tool | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Kendra addresses the technical limitations that impede efficient access to public information. | The SAMHSA STORE houses a tremendous wealth of mission-critical information for public consumption and utilization. Kendra provides an efficient and comprehensive way for the public to access this information. | A comprehensive scan of SAMHSA STORE materials based on a public query in an efficient and effective User Experience. | 25/06/2026 | a) Purchased from a vendor | AWS | No | A comprehensive scan of SAMHSA STORE materials based on a public query in an efficient and effective User Experience. | Not required to disclose | No | k) None of the above | Yes | Not open source (part of AWS Suite of products) | ||||||||||||
| Department Of Homeland Security | CBP | DHS-2705 | Smartphone Information Forensics Triage | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Quickly translate and summarize the content of text messages, prompting the inspecting employee to review the actual raw text if warranted. | Save worker time by providing an alternative to lengthy bitwise forensic device inspections through the application of an expedient triage tool; reduce the number of higher-level inspections. | Translations and summary are shown only on the display/monitor, and are not saved or transmitted. | Translations and summary are shown only on the display/monitor, and are not saved or transmitted. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-313 | Advanced Analytics for X-ray Images (AAXI) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Computer Vision | Reduction of low risk empty commercial vehicles. | AAXI aims to address the problem of anomaly detection in empty commercial vehicles entering the United States at land border ports of entry. The AI models achieve this goal by encoding past X-Ray images of vehicular border crossings in a semantically meaningful way and comparing the current crossing to detect differences amongst the images to identify anomalies. Benefits include enhancement of the capability of humans to consistently detect items of interest/concern present (and possibly concealed) in vehicles crossing into the United States, and increased clearance rate at border crossings so that vehicles operating safely and lawfully may pass through the border faster. | AAXI compares current crossing images to previous crossings of the same tractor/trailer which have been adjudicated by a CBP Officer and recommends further review if warranted. | AAXI compares current crossing images to previous crossings of the same tractor/trailer which have been adjudicated by a CBP Officer and recommends further review if warranted. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-314 | Advance RPM Maintenance Operating Reporter (ARMOR) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | The Advance Radiation Portal Monitor (RPM) Maintenance Operating Reporter (ARMOR) project provides predictive maintenance of RPMs, detecting issues with the equipment before they render the screening lane inoperable. | ARMOR will shorten time to service/repair/maintenance of radiation portal monitors by two weeks. ARMOR will allow better distribution of resources (travel, spare parts, etc.), and the expected cost decrease could be 25-50%. Through decreased outage time and prediction of equipment degradation, ARMOR will increase radiological/nuclear (R/N) security on US borders. | The system will provide a listing of malfunctioning RPMs categorized by issue severity and predicted date of failure. The outputs will be used to create service tickets. | The system will provide a listing of malfunctioning RPMs categorized by issue severity and predicted date of failure. The outputs will be used to create service tickets. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2721 | AI Resume & ATS App | a) Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Speed and scale: During disasters, resume volume spikes and manual review can’t keep up; Format variability: Resumes arrive as PDFs, Word files, text, and scans, making uniform processing difficult; Consistency: Different reviewers apply criteria differently, leading to uneven shortlists and missed candidates; Traceability and defensibility: Hiring choices must be fast, explainable, and audit ready; Searchability: Teams need to quickly find candidates with specific qualifications (for example, 5+ years of EMS experience); Centralization: Candidate information is scattered across files and events, slowing coordination and handoffs. | How AI helps: Reads every format: Scans and reads PDFs, Word files, text, and images so all resumes can be processed; Extracts key details: Pulls out skills, certifications, education, locations, roles, dates, and years of experience from free form text; Understands the job: Reads job descriptions, separates must have and nice to have qualifications and applies appropriate weights; Matches and scores: Compares each resume to the job and calculates a clear match score; Ranks candidates: Sorts candidates by score to produce a prioritized shortlist; Highlights strengths and weaknesses: Summarizes where a candidate aligns well and where they fall short; Flags critical gaps: Calls out missing must have requirements (for example, licenses, certifications, clearances, or minimum years); Explains results: Shows the evidence behind each recommendation, what matched and what didn’t; Conversational search (optional): Lets HR ask plain language questions about the candidate pool (for example, “Show EMS candidates with 5+ years in Region 2”); Human oversight: Routes sensitive or low confidence 
cases to HR/SMEs for review before moving forward. Benefits: Faster: Processes resumes in seconds and handles very large volumes during surge events; More consistent: Applies the same criteria to every resume, reducing variation across reviewers; Better decisions: Ranked lists with clear reasons improve triage and interview selection; More efficient: Cuts manual screening time so staff can focus on final selection and onboarding; Transparent and reviewable: Captures inputs, scores, explanations, and reviewer actions to support audits and continuous improvement. | Structured candidate profiles: Standard fields (skills, certifications, education, location, years of experience) enable fair comparison and precise search/filtering; Ranked lists with match scores: Orders candidates by fit to required and preferred qualifications for fast, defensible shortlisting; Clear explanations: Shows the specific evidence behind each score, including matched items and gaps, to support transparent decisions; Gap/mismatch flags: Highlights missing or insufficient requirements to speed triage and targeted follow up; Dashboards and exportable reports: Filters (for example, location, availability, qualifications) that help HR slice results and coordinate next steps; Optional human in the loop checks: Configurable SME/HR validation for high impact roles or edge cases, maintaining human control over outcomes. 
| Structured candidate profiles: Standard fields (skills, certifications, education, location, years of experience) enable fair comparison and precise search/filtering; Ranked lists with match scores: Orders candidates by fit to required and preferred qualifications for fast, defensible shortlisting; Clear explanations: Shows the specific evidence behind each score, including matched items and gaps, to support transparent decisions; Gap/mismatch flags: Highlights missing or insufficient requirements to speed triage and targeted follow up; Dashboards and exportable reports: Filters (for example, location, availability, qualifications) that help HR slice results and coordinate next steps; Optional human in the loop checks: Configurable SME/HR validation for high impact roles or edge cases, maintaining human control over outcomes. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2749 | Real-Time Language Translation Services | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of ICE personnel often lacking timely access to interpreters in offline or low-connectivity environments, making it difficult to communicate with individuals who speak little or no English during operations. | These tools help reduce delays caused by language barriers in the field and support clearer two-way communication during routine, special, and emergency operations. They reduce reliance on ad hoc workarounds when interpreters are not immediately available and allow personnel to focus more on mission activities rather than the logistics of basic communication. | The planned platforms and mobile applications use AI translation models to convert spoken or written language between English and other languages in near real time. Personnel can speak or type into the tool, which then provides translated text or audio to support two-way conversations during field operations, interviews, and removal processes. The tools are designed to function in offline or low-connectivity environments where possible, recognizing the challenging conditions in which ICE personnel often operate. | The planned platforms and mobile applications use AI translation models to convert spoken or written language between English and other languages in near real time. Personnel can speak or type into the tool, which then provides translated text or audio to support two-way conversations during field operations, interviews, and removal processes. The tools are designed to function in offline or low-connectivity environments where possible, recognizing the challenging conditions in which ICE personnel often operate. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-131 | Automated Target Recognition (ATR) Developments for Standard Screening | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Computer Vision | AIT systems need to use Automated Target Recognition algorithms to detect objects while maintaining passenger privacy. | The purpose of this use case is to improve upon Automated Target Recognition (ATR) algorithms used to reduce privacy concerns because a TSO is no longer required to view Advanced Imaging Technology (AIT) images. The expected benefits are to increase detection, reduce false alarms, and improve efficiency and passenger experience. | The system produces the threat location, which is viewed as a bounding box on a representative human figure, for TSO resolution. | The system produces the threat location, which is viewed as a bounding box on a representative human figure, for TSO resolution. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-132 | Accessible Property Screening (APS) Checkpoint CT Prohibited Items (PI) Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Computer Vision | The AI solution is needed to look for and classify non-explosive prohibited items, because this AI solution alongside the legacy explosive algorithms provides a complete solution for TSA. AI models can process vast amounts of data in real-time, identify anomalies, and provide a way to quickly evolve in identifying new threats with a speed and accuracy that humans cannot match. Once this AI solution is tested in the lab and in the field, TSA will have the capability of using Image on Alarm Only to enable TSOs to bolster accuracy while focusing on human-centered priority tasks. | This AI gives TSA officers scanning checkpoint baggage a continuous, always-watching partner that alerts them to anything suspicious. Currently, a Transportation Security Officer (TSO) who is assigned to every X-ray machine at an airport checkpoint visually inspects each image. This officer resolves system-generated explosive alarms and visually inspects the image for the presence of non-explosive prohibited items such as guns and sharp objects (see TSA Travel site). TSA is working on developing new Artificial Intelligence/Machine Learning (AI/ML) algorithms to automate the search for the non-explosive prohibited items (e.g. guns, knives, etc.). Once a threat is found, the algorithm displays bounding boxes around the suspect item for the operator to then investigate and adjudicate. These AI solutions benefit the public by providing a consistent and uninterrupted level of threat detection as an added layer of security. The ML algorithms allow the TSA officers to be more flexible and to better prioritize their attention on important items to improve security. 
| AI system output is a set of 3-dimensional bounding boxes that is displayed on the X-ray image. The bounding boxes are placed on top of objects or areas where the algorithm believes it has found a prohibited item (threat object). | AI system output is a set of 3-dimensional bounding boxes that is displayed on the X-ray image. The bounding boxes are placed on top of objects or areas where the algorithm believes it has found a prohibited item (threat object). | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-133 | Walk-Through Metal Detector (WTMD) Alternative Automated Target Recognition (ATR) Developments | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Computer Vision | Improve detection over traditional WTMD to include non-metallic threats. | Artificial intelligence (AI)-enhanced millimeter-wave (mmWave) detectors are used as an alternative to Walk-Through Metal Detectors (WTMDs) for passenger screening to detect both metallic and non-metallic threats and prohibited items on passengers at the security checkpoint. These detectors will provide both increased security and a better passenger experience. | The AI outputs target coordinates to the operator viewing station which is viewed as a bounding box on a representative human figure. | The AI outputs target coordinates to the operator viewing station which is viewed as a bounding box on a representative human figure. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-134 | Synthetic data for improved Automated Threat Recognition (ATR) in checkpoint screening | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | a) High-impact | High-impact | Generative AI | Images are used to train Automated Threat Recognition (ATR) and other AI models and systems to detect prohibited items in the screening processes. | To create synthetic data that can be used to improve Automated Threat Recognition (ATR) algorithm development. Synthetic data can be quicker to produce, which will improve effectiveness by addressing and adapting to new threats more quickly. Accessible Property Screening (APS) and On-Person Screening (OPS) are working with vendors and evaluating AI-based synthetic data generation techniques to bolster the pool of training data available to develop machine learning algorithms in ATR applications. | Images that mimic the human body and various threats. | Images that mimic the human body and various threats. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2399 | Electronic Evidence/Video Recording Transcription and Summarization Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | This will cut investigative processing time significantly, while also increasing accuracy. | The Transportation Security Administration (TSA) uses body-worn cameras that incorporate artificial intelligence (AI) technologies as a part of the underlying software which transcribes and translates video footage. The technology provides rapid access to Law Enforcement Officer (LEO) and Investigative data, through transcription. This will cut investigative processing time significantly, while also increasing accuracy. | The AI will transcribe audio/video data and provide a printable artifact. The AI only provides recommendations; the final usable data is reviewed and certified by TSA staff. | The AI will transcribe audio/video data and provide a printable artifact. The AI only provides recommendations; the final usable data is reviewed and certified by TSA staff. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2305 | USCIS Document Translation Service | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | a) High-impact | High-impact | Generative AI | Reduce the man-hours needed for translating evidence documents by leveraging AI document translation technology | Integrate AI and GenAI models trained on relevant subject matter (e.g., immigration law, visa/immigration applications, family/adoption certificates, and other sourced processing materials) to provide fast, accurate translation of written or other digital documents in various languages. Real-time Interpretation: Utilizing AI and GenAI-powered speech-to-speech, speech-to-text translation tools for efficient communication, consultations, and other interactions within DHS. Across all DHS components the need to support language translation and transcription is crucial for operations and adjudications. | The service delivers an image-to-image translation that is displayed side by side with the original document to aid officers in reviewing the evidence and preparing for the interview. | The service delivers an image-to-image translation that is displayed side by side with the original document to aid officers in reviewing the evidence and preparing for the interview. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2514 | USCIS Speech Translation Service | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Provide efficient communication, consultations, and other live applicant interactions in USCIS offices. | Reduce the manpower needed to verbally communicate with applicants and ensure that they are directed in an accurate and efficient manner. | Provides speech to speech and speech to text translation in multiple languages through government-issued iPads. | Provides speech to speech and speech to text translation in multiple languages through government-issued iPads. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-194 | AI Enabled Autonomous Underwater Vehicle | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Computer Vision application onboard the ROV detects items of interest (IoI). The ROV also uses AI for collision avoidance. Once an IoI is detected, a bounding box is placed around the suspected IoI in the image and the user is alerted to a potential IoI. The user then reviews the image and decides if a dive team is required for further inspection. The AI output does not serve as a principal basis for any decisions or actions. | Computer Vision | Customs and Border Protection (CBP) desires to identify potential Items of Interest (IoI) on vessels more quickly, efficiently, and safely. Provides increased shared situational awareness in real time for CBP and strategic partners, and improves mission planning and agent and officer safety while reducing reactionary gaps. | Customs and Border Protection (CBP) intends to identify potential Items of Interest (IoI) on vessels through the use of autonomous systems, which will allow CBP to more efficiently and safely increase shared situational awareness, improve mission planning and agent/officer safety, and reduce reactionary gaps. | AI output will include object avoidance, automated mission execution, and may include imagery of potential Items of Interest (IoI). | AI output will include object avoidance, automated mission execution, and may include imagery of potential Items of Interest (IoI). | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-234 | Relocatable Multi-Sensor System | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Sensors and CUAS capabilities to support significant events. The AI combines sensor data from RF detection, radar, and infrared/electro-optical cameras. It combines all feeds and provides detections on the user interface back to users. Fully standalone and airgapped system. Looking to identify non-RF drones. The system does not have any mitigation capability; it cannot autonomously mitigate an aircraft, it only detects and provides a digital track on the GUI/map for further user investigation. This use of AI does not serve as a principal basis for decision or action. | Classical/Predictive Machine Learning | Border security, detection of Small Unmanned Aircraft Systems (SUAS), and multi-sensor fusion. | The system uses advanced sensor technology to differentiate valid items of interest (IOI), such as unmanned aircraft systems and humans, from other detections such as animals or other environmental objects. By integrating radar and other sensor data, the system filters out false alarms, ensuring more accurate identification of potential IOI. This capability enhances CBP's ability to focus on legitimate risks while minimizing the time spent on non-threatening activities, improving operational efficiency at border and security checkpoints. | The outputs include real-time data identifying and categorizing potential items of interest, while filtering out false or non-relevant items of interest like animals. These outputs are used to provide situational awareness and support decision-making for CBP personnel. | The outputs include real-time data identifying and categorizing potential items of interest, while filtering out false or non-relevant items of interest like animals. 
These outputs are used to provide situational awareness and support decision-making for CBP personnel. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2363 | Anomaly Detection COV Structure | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Computer Vision is intended to detect anomalies in non-intrusive inspection images. The images are then shared with officers who review the detections within the images (represented as a polygon). If the officer feels physical review is required, the vehicle is moved to secondary inspection for more thorough review by an officer. The output of the AI does not serve as a principal basis for a decision or action. | Computer Vision | The Anomaly Detection Algorithm (ADA) models are intended to solve several key problems for U.S. Customs and Border Protection (CBP) related to the screening of passenger and cargo vehicles: improving the detection of anomalies and contraband, enhancing efficiency in image review, enhancing human capability to consistently detect items of interest or concern, addressing high traffic volumes and resource constraints, and supporting the analysis of complex inspections. | CBP is seeking Anomaly Detection Algorithm (ADA) models capable of operating on CBP systems to enable rapid screening of commercially owned vehicles (CoVs). The objective is to develop a suite of algorithms that enhance CBP's Non-Intrusive Inspection (NII) image analysis, improving the detection of anomalies and contraband. These algorithms are intended to assist CBP officers in efficiently reviewing images, with a particular focus on identifying concealed contraband and anomalies in passenger vehicles and cargo conveyances. The implementation of ADA models will enhance human capability to consistently detect items of interest or concern, including concealed objects, in vehicles entering the United States. 
Additionally, these algorithms will enhance throughput efficiency at ports of entry, enabling the expedited processing of compliant vehicles while maintaining robust security standards. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2365 | Anomaly Detection POV Structure | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Computer Vision is intended to detect anomalies in non-intrusive inspection images. The images are then shared with officers who review the detections within the images (represented as a polygon). If the officer feels physical review is required, the vehicle is moved to secondary inspection for more thorough review by an officer. The output of the AI does not serve as a principal basis for a decision or action. | Computer Vision | The Anomaly Detection Algorithm (ADA) models are intended to solve several key problems for U.S. Customs and Border Protection (CBP) related to the screening of privately owned vehicles (PoVs): improving the detection of anomalies and contraband, enhancing efficiency in image review, enhancing human capability to consistently detect items of interest or concern, addressing high traffic volumes and resource constraints, and supporting the analysis of complex inspections. | CBP is seeking Anomaly Detection Algorithm (ADA) models capable of operating on CBP systems to enable rapid screening of passenger and cargo vehicles. The objective is to develop a suite of algorithms that enhance CBP's Non-Intrusive Inspection (NII) image analysis, improving the detection of anomalies and contraband. These algorithms are intended to assist CBP officers in efficiently reviewing images, with a particular focus on identifying concealed contraband and anomalies in passenger vehicles and cargo conveyances. The implementation of ADA models will enhance human capability to consistently detect items of interest or concern, including concealed objects, in vehicles entering the United States. 
Additionally, these algorithms will enhance throughput efficiency at ports of entry, enabling the expedited processing of compliant vehicles while maintaining robust security standards. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | Bounding boxes around an anomaly or unidentifiable object(s) within an image or any portion of the image that cannot be identified or explained. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2565 | CBP Common AI Service(CCAIS) Image Analysis and Data Correlation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case is not yet live in production, and the current ATT only covers use within the CCAIS platform. The use case needs further review prior to the ATT for ICAD integration. The model can identify people, animals, and vehicles as well as extract license plate information, and we will need to determine how exactly the extracted information will be used for operations if it is determined to be feasible. The AI inventory team will need to review once the AI Use Case is fully defined. | Generative AI | Reduces the amount of time agents need to spend analyzing imagery by automatically flagging images for review. | The AI’s intended purpose is to solve the challenge of efficiently monitoring typically unoccupied or restricted environments for unauthorized human or vehicle presence. It automates the initial detection of such activities, which can be resource-intensive and prone to delays when done manually. The expected benefits include enhanced security of sensitive areas, increased operational efficiency through automation, the ability to scale oversight and respond more effectively to potential incidents, better protection of public assets, sensitive environments, and critical infrastructure, and more efficient use of public resources in security operations. | The AI System outputs image analysis of what is in the image. It can identify vehicles and determine the type and license plate number. It can identify if people are present and if they are armed. It can identify environment and conditions. The output is provided through bounding boxes around items of interest within the image and through textual descriptions. 
| The AI System outputs image analysis of what is in the image. It can identify vehicles and determine the type and license plate number. It can identify if people are present and if they are armed. It can identify environment and conditions. The output is provided through bounding boxes around items of interest within the image and through textual descriptions. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2568 | Non-intrusive vessel and object detection tool (Tethered Aerostat Radar System (TARS)) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI will analyze images from the sensors to determine what the item in the image is (e.g., a vessel). The image location will also be correlated with any publicly available AIS (Automatic Identification System) data for maritime traffic. AIS is used as a filtering mechanism in most cases to filter out legitimate traffic. The user is provided with information for detected objects lacking an AIS signal. After the alert of a detection, a trained CBP agent or user reviews the image to identify and classify the activity taking place. This use of AI does not serve as the principal basis for decisions or actions. | Computer Vision | Detect, identify, and classify vessels and objects in the maritime environment. | Utilizing AI to automate maritime object detection with real-time outputs to streamline data flows, intended to increase the efficiency of existing resources and minimize the mission-critical decision-making timeline. | The AI will analyze images from the sensors to determine what the item in the image is (e.g., a vessel). The image location will also be correlated with any publicly available AIS (Automatic Identification System) data for maritime traffic. AIS is used as a filtering mechanism in most cases to filter out legitimate traffic. The user is provided with information for detected objects lacking an AIS signal. After the alert of a detection, a trained CBP agent or user reviews the image to identify and classify the activity taking place. This use of AI does not serve as the principal basis for decisions or actions. | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-317 | RAPTOR (Rapid Tactical Operations Reconnaissance) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI system processes data from radar, infrared sensors, and video surveillance to detect and track suspicious activities along U.S. borders. By incorporating AI-powered vessel registration, aircraft tail number, license plate, and object detection, RAPTOR significantly boosts domain awareness. If the OCR can capture vessel or license plate information, that information is run through CBP Super Query; if derogatory information is returned, or if other information from the imagery warrants it (e.g., a high number of gas cans in the image), the information is sent through SMS or email to the field for response to the potential activity. Agents log into RAPTOR and review the image for accuracy/validity to avoid any investigation of the wrong boat. Neither the AI nor the output of the AI serves as a principal basis for a decision or action. Human review takes place prior to any final decision to act, and then personal interaction leads to any follow-up decisions. | Computer Vision | Provide Tactical Domain Awareness for CBP Agents, making law enforcement efforts more efficient. | RAPTOR will significantly increase domain awareness and the agency’s ability to engage in intelligence-driven operations. The AI capability acts as a force multiplier and saves personnel from analyzing video feed from a stationary camera and manually noting all boat identifiers, improving their ability to respond quickly to potential threats and gather critical intelligence for law enforcement and border control operations. | Text transcription of vessel registration/documentation data and photographs of the vessel. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-204 | Semantic Search and Summarization for Investigative Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides a natural language contextual search and summarization capability against existing Reports of Investigation and other investigative data, producing more relevant search responses and investigative insights in the form of data, information leads, or connections that HSI personnel can use to inform investigations. The AI output (search queries and responses) provides investigators with easy-to-read search responses that are accompanied by links to source material for further analysis. Any data used to produce these investigative insights are first obtained through legal means and processes for the purposes of law enforcement investigations and do not significantly impact the categories listed in the definitions of “high-impact AI.” Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for enforcement decisions. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence. | Generative AI | This use case intends to solve the problem of searching and extracting relevant information from large volumes of unstructured investigative data. | The benefits of using this AI include increased efficiency, reduced risk of missing valuable information, and enhanced investigative capabilities. | The outputs of this AI technology are the extracted relevant information and summaries. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2552 | Mobile Device Forensics for Investigations | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case falls into a presumptive high-impact category under M-25-21 Section 6(j) due to its role in supporting law enforcement activities, specifically the application of digital forensic techniques. However, it does not meet the definition of high-impact AI as outlined in OMB Memorandum M-25-21 because its outputs do not serve as a "principal basis" for decisions or actions with legal, material, binding, or significant effects on an individual or entity's civil rights, civil liberties, or privacy; or human health and safety. The AI capabilities, including classification, decoding, and origin analysis, are designed to assist analysts in prioritizing human review rather than independently determining outcomes. For example, AI-generated tags and app artifacts are suggestions that require manual validation and further investigation before enforcement actions are taken. All outputs are reviewed as part of a broader investigative process before any actions are taken. | Computer Vision | The AI is intended to solve the problem of analysts having to manually organize, decode, and review large volumes of complex mobile device data, which makes it difficult to quickly identify information relevant to an investigation. | AI-generated category tags, app artifacts, and origin classifications allow analysts to more effectively prioritize human review of mobile device data that may be responsive to the investigation. | The platform’s AI outputs include: (1) AI‑suggested category tags for extracted media (videos and images) and apps, based on user‑selected categories (e.g., media: cars, drugs, weapons; apps: chat, spoofing, cryptocurrency); (2) AI‑decoded app data artifacts, such as chats, contacts, and locations; and (3) AI predictions about whether a media file was captured on the extracted device or obtained from another source. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2595 | Open-Source Intelligence for Lead Identification and Targeting | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case falls into a presumptive high-impact category due to its role in supporting law enforcement activities, specifically in identifying individuals who pose risks to community safety or violate U.S. immigration laws. However, it does not meet the definition of high-impact AI as outlined in OMB Memorandum M-25-21 because its outputs do not serve as a "principal basis" for decisions or actions with legal, material, binding, or significant effects on an individual or entity's civil rights, civil liberties, or privacy; or human health and safety. The AI modules, including risk extraction, image analysis, language detection, and AI chat, are designed to augment traditional investigative processes by providing structured annotations and insights for analysts to review. These outputs are explicitly described as supporting tools that require human validation and integration with other government data holdings before any enforcement action is taken. Furthermore, the AI system operates as a supplementary tool, consolidating and organizing information to enhance efficiency, but does not independently produce outcomes that directly affect civil rights, civil liberties, or privacy. The safeguards in place, including human validation and adherence to established legal standards, ensure that the AI outputs remain supportive rather than determinative, confirming that this use case does not meet the high-impact definition. | Natural Language Processing (NLP) | The AI is intended to solve the problem of traditional manual open‑source searches missing relevant identifiers or connections in large volumes of online information. | The platform’s AI capabilities reduce the time and effort required to sift through large datasets, improve the ability to uncover relevant information, and enhance the overall efficiency and effectiveness of ICE enforcement operations. | The platform utilizes AI modules to assist ICE Enforcement and Removal Operations (ERO) in open-source research and investigations. The risk extraction capability uses AI to identify and classify potential risks within documents, such as references to criminal activity or connections to organizations of concern, and generates structured annotations for analysts to review. The platform also includes AI-powered translation, which allows analysts to work with multilingual content, and image analysis, which detects and extracts objects from images linked to documents to provide additional investigative context. Additionally, the system can analyze language within documents to highlight text that may indicate threats or planned violence, drawing attention to sections that require closer examination. An AI chat interface further supports analysts by enabling real-time, conversational queries and responses, making it easier to surface insights and context from large datasets. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2667 | Global Maritime Intelligence | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case does not meet the definition of "high-impact" as outlined in Section 5 of M-25-21 because its outputs do not serve as a "principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety." Instead, the AI system generates intelligence reports and risk assessments that are used to support human decision-making. Analysts review the outputs and initiate follow-on actions after validating the source data, such as inspections or investigations, ensuring that the AI's outputs are not the sole or principal basis for these decisions. While the use case aligns with a presumed high-impact category due to its critical role in maritime safety and law enforcement, it does not meet the stricter definition of high-impact because its outputs are advisory and produce leads that must be validated as part of the investigative process prior to action being taken. | Classical/Predictive Machine Learning | The AI is intended to solve the problem of investigators having to manually piece together fragmented maritime activity data from many sources, which makes it difficult to see relationships among vessels, shipments, and ports and identify potential leads on illicit maritime activity. | The use of AI in this process helps Homeland Security Investigations quickly identify potential threats, improves the efficiency of intelligence operations, and enables faster responses to maritime risks that would be difficult to detect through manual analysis alone. | The platform uses several machine learning (ML) models and other AI techniques to process and analyze large volumes of maritime data from multiple sources, such as satellite imagery, Automatic Identification System (AIS) signals, and transactional maritime data. The platform’s AI models detect patterns and anomalies that may indicate potential threats or behaviors consistent with illicit activities like smuggling or trafficking. These AI-generated insights are incorporated into detailed intelligence reports and risk assessments for platform users. These outputs support HSI analysts’ decision-making and are reviewed in conjunction with other HSI data holdings to determine whether analysts should take follow-up actions, such as investigations, into flagged entities. | ||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-48 | Email Analytics for Investigative Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides more efficient data processing for HSI personnel. The AI output may be used to produce investigative insights in the form of data, information leads, or connections that HSI personnel can use to inform investigations, but the output itself is data preparation and organization so HSI personnel can produce those leads when combining the AI output with the personnel’s expertise and other relevant investigative data and information. Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Natural Language Processing (NLP) | This use case intends to solve the problem of the time-consuming and resource-intensive process of preparing multilingual email data for analysis. | Homeland Security Investigations (HSI) personnel encounter large volumes of legally acquired, multilingual email data that must be prepared (ingested, triaged, translated, searched, and filtered) before it can be analyzed to support investigations. The email analytics workflow eliminates manual data preparation processes and leverages machine learning to conduct spam message classification, translation, and entity extraction, including names, organizations, or locations. It also utilizes HSI's AI-enabled translation capabilities (see related use case “Translation and Transcription for Investigative Data”) for translation of emails in other languages to English. The output reduces time and resources spent preparing data, increases the analytic utility of the data, and allows HSI personnel to more quickly conduct analysis on the information. | The output is email data that has undergone spam message classification, translation, and entity extraction, including names, organizations, or locations. | ||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-417 | Machine Learning Analysis Applied to Cyber Threat Hunt Data | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. This use case detects anomalies or outliers, and also may be used to classify or categorize certain activity within data that has already been collected during an authorized cyber threat hunt operation within TSA’s networks. The AI outputs are reviewed by a human analyst team to determine if any of the patterns might be associated with unusual activity. The analyst would then continue to investigate further as during normal threat hunt operations. | Classical/Predictive Machine Learning | The use case addresses the problem of how to make maximal use of available data to identify anomalies and other patterns that may inform the cyber threat hunt process. | Cyber threat hunts typically involve a vast amount of data. Machine learning models can quickly and efficiently process this data and can identify anomalous activity more effectively than humans. This could improve the efficiency and quality of cyber threat hunts by detecting suspicious behavior more quickly and increasing the amount of data that can be analyzed during a hunt. | Currently it is a list of potential anomalies or outliers within the system, but development is still ongoing. | ||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2627 | Extended Automated Name Harvesting (eANH) | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | No. The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The use case increases efficiency of tasks associated with the accurate and timely identification, analysis, and review of biographical information needed for adjudication. The AI outputs are suggested aliases and DOBs related to the individual query, which USCIS staff must review to accept, reject, or ignore the suggested information. The AI outputs reduce the amount of adjudicative time spent manually harvesting aliases and DOBs. The use case increases efficiency of tasks associated with reviewing existing records for adjudicating requests for immigration benefits. Completing such adjudications is not dependent on the use case; however, lack of this tool would significantly increase human processing times and potentially reduce the accuracy of information consulted during the human review process. | Natural Language Processing (NLP) | OIT is developing a solution that systematically extracts text from evidence documents and identifies aliases and DOBs from the extracted text. | Since users no longer need to read through the entire set of case evidence, which is often hundreds of pages, this should decrease case processing time while retaining the same or better performance. | Extracted names and DOBs | ||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2362 | AI to generate testable synthetic data | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Improve integration with trade partners and reduce implementation time of new capabilities. | The purpose of using AI to generate test data for trade partners is to create more realistic data for trade partners to use for testing their systems before releasing new ACE capabilities. Currently there are many data issues where test data does not accurately reflect real production data, resulting in unrealistic failures during testing and wasted time and resources tracking down false-positive errors. By providing trade partners with more realistic test data, the expectation is that testing times will be shorter and enhancements and capabilities can be delivered more quickly. | The AI capability would generate test data without PII or other trade-sensitive data and allow for more accurate simulation of production data. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2375 | Thermal Power Generation with Geoseismic IoI Detection and Classification | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Overcomes the following problems: 1) the cost of replacement batteries, 2) the time it takes agents to constantly replace batteries, and 3) the problem of revealing the UGS location when replacing the batteries. | Utilize seismic sensor data to determine Items of Interest in deployed locations. Increases situational awareness in austere environments and reduces the need for battery replacement due to self-charging. | Alert with the classification and confidence interval for the Item of Interest (IoI). | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2444 | API Security Vulnerability Technology | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Discover, ingest, and analyze APIs to create and run thousands of custom attack scenarios against every build prior to production. Catch security vulnerabilities as early in the software development life cycle (SDLC) as possible. Educate and empower security and dev teams on sound API security strategies. | With the necessity of leveraging application programming interfaces (APIs) for applications across the enterprise, this AI technology is intended to run thousands of custom attack scenarios against APIs on a continuous basis. This will help to identify potential security vulnerabilities prior to production-level deployments, and enable the enterprise to develop essential remediations, if required. Additionally, the technology is intended to provide continuous monitoring of APIs to provide real-time alerts on new potential vulnerabilities. | The AI system is intended to output custom reports on identified vulnerabilities within application programming interfaces (APIs). These reports will include the identified risk and a summary of the risk, as well as tailored remediation guidance based on the applications, environment, data, and tests conducted for end-users. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2446 | Cyber Threat Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Cyber deception is a proactive cybersecurity strategy that involves creating a network of deceptive elements, such as decoys, to mislead and divert potential attackers. By strategically deploying these deceptive artifacts across an organization’s network, cyber deception aims to confuse attackers and delay their progress. | Cyber deception is used alongside other cybersecurity measures to enhance overall security posture. Cyber deception not only enhances threat detection but also provides valuable insights into attacker behavior, aiding in the development of more effective defense strategies and minimizing the risk of successful cyberattacks. Cyber deception technology plays a crucial role in enhancing cybersecurity defenses by enabling organizations to detect threats faster and decrease attacker dwell time. | When attackers engage with the deceptions, they reveal their presence and tactics, allowing security teams to detect, analyze, and respond to threats in real time. This proactive approach not only reduces the time attackers spend within the network but also provides valuable insights into their tactics, techniques, and procedures (TTPs). | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2447 | Forced Labor Virtual Consultant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Customs and Border Protection (CBP) enforces U.S. laws against importing goods made with forced labor by analyzing extensive data and addressing complex global supply chains. CBP aims to leverage advanced tools to protect U.S. economic security and uphold its leadership in combating forced labor. | The system aims to support Customs and Border Protection (CBP) analysts by integrating internal forced labor databases with preloaded trend analysis data, enabling rapid risk identification and report generation. It complements CBP’s data science team by providing faster access to critical information, enhancing enforcement efficiency. | Plain-language reports, analysis, and direct LLM responses to user queries. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2449 | Multi-media Insight Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | The AI system solves the challenge of analyzing large volumes of surveillance footage and audio data efficiently. It enables Customs and Border Protection (CBP) to conduct real-time searches for objects, sounds, language, and events, detect anomalies, and track items of interest across multiple video streams concurrently. By providing outputs like real-time alerts and visual annotations, the system enhances situational awareness, streamlines investigations, and supports faster decision-making to improve border security and operational efficiency. | The system aims to enhance CBP's ability to monitor and analyze surveillance footage from existing CBP camera technology, enabling real-time detection of anomalies and tracking of items of interest within the video frame. It will improve situational awareness, streamline the object and event identification process, and support faster, more accurate decision-making, ultimately enhancing security and operational efficiency. Essentially, this technology will allow users to review significant amounts of historical imagery to identify objects, scenes, and activities of interest and reduce the manual burden of searching multiple video streams. | Outputs include real-time alerts on activities that should be watched and visual annotations to identify items of interest. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2452 | Source Code Development Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The tool uses generative artificial intelligence (GenAI) powered by large language models (LLMs) and coding foundation models to assist with software development. The AI serves as a coding assistant, allowing users to create, refine, and complete software projects through natural language commands and queries. It generates functional software code, streamlining the development process, reducing manual effort, and improving efficiency. | The tool is designed to enhance software development efficiency by enabling end users to create, refine, and complete projects more quickly through the use of a generative artificial intelligence (AI) coding assistant. By automating portions of the coding process, the tool reduces development time, optimizes workflows, and improves overall productivity. | The tool generates functional software code for end users, enabling them to efficiently develop, refine, and complete projects with minimal manual effort, delivering high-quality code outputs tailored to user prompts and project requirements. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2540 | Open Metadata | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Metadata catalog identification. | The AI uses a large language model to generate proposed descriptions for field names in our data catalog. The tool itself does not come with Generative AI, but we have built the capability to leverage generative AI for description generation and quality test case generation. Roughly 80% of our data assets in databases do not have descriptions. The use of AI to generate these descriptions would allow us to provide descriptions for our vast data assets. The quality test cases ensure that the data in our systems are accurate, correct, and consistent. These AI tools can be turned on or off. | Proposed descriptions for database field names and generated quality test cases. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2551 | Automated Incident Creation (IT) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to reduce the time and effort required of a business analyst to manually review contact messages submitted to the Business Connection (BC) and Technical Reference Model (TRM) teams. By evaluating these messages, the AI will determine whether a ServiceNow helpdesk ticket is needed to resolve the submitted question or concern. This automation will streamline the process and ensure timely resolution of user issues. | AI will be used to drive automation via integration with ServiceNow, automatically triaging requests to determine if a ServiceNow helpdesk ticket should be created. The determination and justification will populate a field in the table that stores user contact messages, enabling validation by a business analyst before finalizing ticket creation. | Responds with a Yes/No determination and a justification of the determination, which is stored in the contact message table for validation by a business analyst. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2566 | TRM Classification Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The system uses Gemini 1.5 to perform multi-label classifications, including product categorization, subclassification, and reasoning for the classification outputs, replacing an existing platform to reduce costs and improve efficiency. | The system uses AI to categorize vendor product capabilities, reducing costs and improving efficiency by replacing a previous system. The classifications are integrated into the Technical Reference Model (TRM). | Multi-label output of vendor product classifications and subclassifications, with reasoning for each classification output. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2567 | FOIA processing automated redaction (RedactAI) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case leverages AI technology to streamline the identification and redaction of sensitive content in documents related to FOIA requests. Using Vertex AI and other GCP services, the system identifies and categorizes content that may require redaction based on specific controls. This automation is expected to significantly reduce processing times compared to manual redaction, enhancing efficiency in handling FOIA requests. The system provides recommendations for content to be redacted. Users are required to review and manually approve the suggested redactions based on the appropriate FOIA exemptions. | AI is being used to find content that may need to be redacted in documents related to FOIA requests and applicable FOIA exemptions. The benefits are expected to be much faster processing time for the requests as opposed to manually performing the task. | Output is recommendation of content to be redacted. Users are required to review and manually approve the suggested redactions based on the appropriate FOIA exemptions. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2569 | Speech Assist Virtual Interview Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Accurate audio processing automation and draft transcription generation to reduce the labor-intensive manual work associated with transcribing audio interviews and reports, while ensuring Officer-vetted accuracy and nuance. | To enhance operational efficiency by accelerating the processing and analysis of audio data, resulting in both reduced labor burden and compressed workflow timelines. Speech Assist will offer diarized transcriptions of conversations alongside conversation summaries and named entities extracted from audio captured for various CBP mission use cases. The output will be used to automatically draft a document for review by CBP officers and agents. Speech Assist will reduce the time required for the interpretation and comprehension of key information from audio data, as well as the time required for transcribing audio reports. | The AI system will provide original audio clips alongside feature-rich output reports that include diarized transcriptions, transcript summaries, speaker summaries, and named entities extracted from the audio clips. The diarized transcript will consist of audio segments from the audio clip containing the start and end time of the audio segment, a unique label for the speaker (e.g., Speaker A, Speaker B, etc.), and their verbatim speech in the original language. The output is provided as a highly structured nested JSON containing free text that is then used to automatically generate a draft transcription for review. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2571 | Business Automation and Improved Search Ability with AI | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The Business Connection (BC) and Technical Reference Model (TRM) teams have identified areas in which business automation can help business analysts make determinations faster and improve the ability of end-users to search for the information that they need more quickly. | AI will be used to drive automation via integration with ServiceNow, automatically classifying products, suggesting alternatives, validating vendors' headquarters and operations, summarizing meeting notes, and improving search capabilities. | The AI will output a potential product classification and sub-classification that would best fit the product within the TRM and BC. The vendor vetting feature will output the headquarters of a vendor and a summary of its operations; the tool will also produce summarizations of meeting notes. Searching will output a list of relevant products from a database in ServiceNow. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2586 | CodeGen | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Workforce enablement of AI developers. | The primary purpose is to provide code generation support for CBP developers, with the anticipated benefit of significantly increasing their efficiency and productivity. By automating repetitive coding tasks and suggesting optimal solutions, this initiative aims to free up developers to focus on more complex and strategic projects, ultimately accelerating the delivery of critical work. | The system's output consists of generated code snippets and/or textual explanations and suggestions related to code development. This output is designed to assist developers in writing, understanding, and debugging code more effectively. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2587 | CounselAI | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | CounselAI will minimize the manual replication of work product, strengthen CBP’s ability to provide consistent responses across the agency’s sizable litigation portfolio, and enable OCC to better defend against legal challenges. | CounselAI will allow OCC to be more efficient and effective. The LLM uses existing work product to help identify similar legal challenges and generate successful draft language for use in litigation. The LLM will also save users time with its search and summarization features. | CounselAI will answer user questions and generate content. The AI output is in chat format, answering user questions about the “knowledgebase” of data. The AI output will also include LLM generated content, such as draft responses and documents to be used in litigation. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2663 | Border Infrastructure Center of Excellence AI (BICE AI) | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The BICE AI system automates the creation of precise legal descriptions, parcel reports, and boundary maps from geospatial data, reducing manual effort, errors, and time. By leveraging generative AI, it ensures accuracy, compliance, and efficiency, enabling real estate professionals to focus on higher-value tasks and strategic decision-making. | The intended purpose of the BICE AI is to automate the generation of precise legal descriptions from geospatial data. It's designed to act as an expert assistant for real estate professionals. The expected benefits are increased efficiency, accuracy, and compliance in creating legal documentation. By automating this process, the system frees up specialists to focus on higher-value tasks, reducing the reliance on manual drafting and review by subject matter experts. | The BICE AI system simplifies documentation and planning tasks for real estate professionals by generating several valuable outputs. Its primary feature is the creation of professional-grade legal descriptions, which convert geospatial data—such as coordinates and bearings—into detailed, accurate text in a natural-language format suitable for legal use. This ensures precision and compliance in property documentation. The system also produces parcel intelligence reports, summarizing ownership details, parcel IDs, and property boundaries by analyzing user-defined areas alongside existing parcel data. These reports help professionals quickly understand property ownership and identify adjacent or impacted parcels. In addition, BICE AI generates metes and bounds maps, offering visual representations of property boundaries that include bearing and distance information. 
These maps can be exported as image files (e.g., PDF or PNG) for easy inclusion in deeds, permits, and other legal documents. The system also provides domain-specific summaries, delivering tailored insights for specific needs, such as parcel acquisition justifications or property encumbrance reports. These outputs enable real estate professionals to make informed decisions efficiently, enhancing their ability to manage property-related tasks effectively. | |||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-311 | Integrated Defense and Security Solutions (IDSS) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This system provides assistance in detecting high-risk parcels in the international commerce space. | The system improves the screening efficiency and accuracy of contraband detection in international express consignment and mail inspection. | The system provides a segmented image, highlighting anomalies for further inspection by CBP personnel. | |||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-107 | Malware Reverse Engineering | a) Pre-deployment – The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This AI capability uses deep learning to assist CISA analysts with understanding the content of malware samples, automating tasks such as triage and indicator extraction. | This use case delivers improved internal government tools for reverse engineering of malware and speeding the development of cyber threat intelligence that can be shared across the government and with CISA partners. Threat actors can leverage the same malware for long periods of time, so having the ability to improve analysis and generation of shareable cyber threat intelligence forces threat actors to spend more resources generating new malware. Machine learning and other analytical tools are leveraged to guide malware analysts and automate elements of the manual reverse engineering process. Automation of tasks such as triage and indicator extraction allow threat hunting analysts to meet high demand and focus more on adversary response. | A report is generated from malware samples submitted to the analysis pipeline that is then used by human analysts to facilitate the malware triage process. Additional recommendations are displayed via plugins to reverse engineering tools. | |||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-2335 | Draft Tailored Summaries of Media Materials for Different Publication Channels | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI capability accelerates the process of drafting summarized content for CISA’s published products. | The Large Language Models (LLMs) will summarize information from historical publications and pre-production documents intended for external publication. | An interface is provided for pre-publication documents to be uploaded. The system will then automatically generate appropriate messaging using approved templates and tag the documents for review prior to publication. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2709 | Spend Plan Analysis GPT | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Agentic-AI | This tool will augment the wider FEMA workforce by enabling them to interrogate spend plan data stored in PBIS/FEMADex to gain quick insights and rapid responses to data calls or requests for information. | The tool will also allow greater insights for the FEMA leadership to understand where FEMA plans to spend their provided budget authority and where it was actually spent. | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging GPT-4o. The tool will provide responses to user queries based on the Spend Plan and Actual execution data. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2712 | Administrative & Productivity Support for IRC Resource Library | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Long-term disaster recovery requires analyzing large amounts of information stored in multiple locations. IRC typically reviews past recovery plans and projects and available federal and state funding options, and considers community needs and other recovery strategies used in the event. High-level reviews are necessary for FEMA Program Areas (IA, PA, etc.) on potential funding gaps and cost share support. Synthesizing that information from the various storage locations, including other departments, IRC SharePoint, and TRAX poses some challenges. | This use case is designed to solve these challenges by reducing inconsistency in manual research, improving access to historical knowledge, and helping staff quickly identify relevant recovery resources. The purpose of this tool is to assist in the identification of patterns in current and past disaster events. Earlier identification will expedite situational awareness and the development of recovery needs and strategies. It allows the IRC team to define a recovery approach and deliver funding and resources that match a community’s needs more quickly. With faster decision-support for FEMA staff, more consistent analysis across regions, and better use of federal resources to support communities after disasters, we are more capable of delivering the FEMA mission. | The AI tool will produce a set of organized, easy-to-read outputs, such as bibliographies, summaries of recovery projects, evaluations of strategies from similar disasters, and recommended approaches tailored to currently available funding. These outputs will be used by the IRC teams to inform planning discussions, prevent benefit duplication, and guide technical assistance to states and local governments. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2713 | Individual Assistance Document Translation | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This AI use case addresses three inter-related problems: It eliminates delays associated with human translation of documents originally in non-English languages; it reduces the cost of translation from approximately $40.00/document to pennies per document; and the entire document will be translated, providing IA visibility to all the contents (instead of a summary deemed by a contract staff as representative of the original document) and thus making it possible to make fully informed decisions about the survivors’ applications. | AI will be used because it is a technology that can perform direct translation automatically and quickly, regardless of document length, enabling faster and more accurate case processing, resulting in improved services to survivors and reduced cost to taxpayers. | The output is the English version of survivor-submitted documents that are non-English in their original version. Both the original submission and the translated version will be stored in the document repository as part of the survivor’s application for assistance. They are substantiating documents that support assistance determinations. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2717 | Grants Manager Artificial Intelligence ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The chatbot is intended to solve inefficiencies in FEMA’s grants application review process by streamlining access to policy information, reducing the time and effort required for manual research, and simplifying the interpretation of complex policy inquiries. It addresses the need for accurate, consistent, and accessible information to improve decision-making and enhance the efficiency of the PA Program. | The expected benefits include increased operational efficiency, improved accuracy in policy interpretation, and cost savings for FEMA’s mission. By providing quick, standardized responses, the chatbot supports faster and more equitable processing of grants, ensuring disaster survivors receive timely assistance. The system’s analytics and feedback mechanisms allow for continuous improvement. | The AI system, powered by Azure OpenAI Services, generates outputs such as policy-based responses to user queries, concise summaries of complex policies, and historical chat records for reference. It also provides performance insights through a dashboard, tracking usage patterns and response accuracy, and incorporates user feedback to refine its functionality. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2718 | RTPD Division Services Desktop Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Our government developers lack the capacity and consistency to produce quality code in a timely manner. Our administrative staff facilitate a multitude of processes that require manual intervention. This requires significant overhead, is prone to errors, and results in simple steps or actions taking longer than necessary as requests get lost in email or other forms of communication. There is also no audit trail or logging outside of a personal email address or limited access shared mailboxes, making it difficult for others to step in and facilitate activities. | AI Coding Assistants can help identify potential issues with code, help our developers troubleshoot more quickly, and begin complex coding more efficiently. This will also enable a team of developers to build consistency across resources and a repository of reusable code segments to speed the delivery of new features and functions. | In this use case, we can expect higher quality code, reducing errors and defects in working software. Administrative staff will be able to monitor progress rather than facilitate it and focus their attention on higher value mission support activities. | |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2724 | Program Integrity (RRAD-PI) AI Counter-Fraud Enhancement Measures | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The AI use case aims to further expand and address RRAD-PI’s fraud detection and prevention challenges within disaster recovery programs. Specifically, it seeks to mitigate fraudulent activities, identity theft, and deceptive practices that compromise program integrity, ensuring that resources are allocated efficiently and equitably to eligible individuals and entities. | The expected benefits include:
• Enhanced Fraud Detection: Improved identification of fraudulent activities, reducing financial losses and ensuring program integrity.
• Operational Efficiency: Automation of manual processes, such as document review and identity verification, leading to faster application processing and reduced administrative burden.
• Proactive Fraud Prevention: Early detection of fraud risks enables timely intervention, minimizing harm and protecting public funds.
• Improved Resource Allocation: Ensures disaster recovery resources are distributed to legitimate recipients, fostering public trust in government programs.
• Cross-Agency Collaboration: Facilitates secure data sharing across agencies, enabling a unified approach to combating fraud schemes that span jurisdictions.
• Public Confidence: Strengthened program integrity enhances public trust in the agency’s ability to manage disaster recovery efforts effectively. 
| The AI model/system may generate outputs such as:
• Fraud Risk Scores: Quantitative assessments of fraud likelihood for transactions, applications, or entities.
• Anomaly Alerts: Notifications of unusual patterns or behaviors indicative of potential fraud.
• Network Maps: Visual representations of relationships between entities, highlighting connections to fraudulent actors.
• Document Analysis Reports: Summaries of inconsistencies, deceptive language, or forgery detected in submitted documents.
• Real-Time Monitoring Flags: Alerts for suspicious activities requiring immediate intervention.
• Behavioral Biometrics Insights: Reports on user behavior anomalies, such as unusual typing patterns or device usage.
• Image/Video Verification Results: Validation of authenticity for submitted visual evidence.
• Threat Intelligence Updates: Integration of external threat data into fraud detection models.
• Geospatial Analysis Findings: Location-based discrepancies in claims, such as mismatched disaster relief applications.
• Cross-Agency Fraud Insights: Aggregated data analysis highlighting fraud schemes across jurisdictions. 
| |||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2726 | Semantic Search, Summarization, and Data/Spatial Visualization for NCR Watch COP/Dashboard | a) Pre-deployment – The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The speed of information, particularly in a developing incident, challenges the capacity of Analysts to collect, sort, source, and validate in real time. The volume of information required to confirm with confidence takes time to review. Time focused on these tasks leaves less time for moving from basic situational awareness to providing the contextual value that gives national capital region (NCR) stakeholders situational understanding, which allows them to fully utilize the information for operational decision-making. The AI would provide a tool for collection and synthesis of the available data, as well as geospatial context. | Expedited, continuous scraping of data from identified official and unofficial open sources, comparison against pre-defined critical information requirements, and summarization and sourcing of information for review and approval by the FEMA NCR Watch Analyst before publication improve the efficiency and effectiveness of the NCR Watch by allowing Analysts to focus on consuming and validating information and adding value with context. The self-designated federal, state, local, and tribal (FSLT) partners in the NCR who receive NCR Watch products would benefit from greater fidelity in the provided information and the ability to use it more easily for operational decision-making. Automation is anticipated to increase the breadth of sources and the speed of review, allowing Analysts to create contextualization and analysis products based on the improved data production, as well as geospatial automation in combination with incident data. 
Faster Decision-Making: processes vast amounts of data instantly, enabling rapid identification of critical information for timely responses. Improved Accuracy: reduces human error by filtering irrelevant data and prioritizing actionable insights. Enhanced Situational Awareness: improves the breadth of information analysis and overall awareness of activities in the NCR. Resource Optimization: helps allocate resources efficiently based on real-time needs. Predictive Insights: forecasts potential developments, aiding proactive measures. Reduced Information Overload: streamlines data, ensuring decision-makers focus on key priorities. | The anticipated output is a Common Operating Picture (COP) platform or Dashboard available free to FSLT subscribers in the NCR to display incident reporting and associated contextual information, such as the physical location on an ArcGIS map. Simultaneously, a non-public COP will contain additional contextual information for internal FEMA reporting. AI will produce continuous data queries and create alerts for Analysts for items meeting configurable critical information requirements and essential elements of information. AI will deliver a configurable summary of sourced material on pre-defined CIRs to the Analyst for review and publication to the internal and/or external COP/Dashboard. AI will have tools for additional contextual analysis and spatial and data visualization for use in the internal and/or external COP/Dashboard. | The anticipated output is a Common Operating Picture (COP) platform or Dashboard available free to FSLT subscribers in the NCR to display incident reporting and associated contextual information, such as the physical location on an ArcGIS map. Simultaneously, a non-public COP will contain additional contextual information for internal FEMA reporting. 
AI will produce continuous data queries and create alerts for Analysts for items meeting configurable critical information requirements and essential elements of information. AI will deliver a configurable summary of sourced material on pre-defined CIRs to the Analyst for review and publication to the internal and/or external COP/Dashboard. AI will have tools for additional contextual analysis and spatial and data visualization for use in the internal and/or external COP/Dashboard. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-206 | Title III Semantic Search and Summarization for Translated Content | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of efficiently searching through and analyzing large volumes of translated evidentiary data, which can be time-consuming and labor-intensive. | The Title III Semantic Search and Summarization functionality will augment translation and transcription services by extracting relevant data using machine learning and natural language processing for correlation and semantic search. Results can then be summarized using a large language model, giving users a tool to target relevant data only. This capability accelerates investigative analysis by rapidly identifying persons of interest, surfacing trends, and detecting networks or fraud, saving hundreds of man hours in manual analysis. HSI will use this tool to generate leads, and further action will be required by investigators and analysts as part of the full investigative process before any action is taken against an individual. | Outputs include semantic search results, concise data summaries, extracted key entities, identified trends and patterns, and generated investigative leads from large volumes of translated and transcribed evidentiary data. | Outputs include semantic search results, concise data summaries, extracted key entities, identified trends and patterns, and generated investigative leads from large volumes of translated and transcribed evidentiary data. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-208 | Policy Analyst Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of the significant manual effort it takes SEVP staff to research and respond to regulatory and policy questions related to foreign students in the F and M classes of admission, as well as schools that are either certified or seeking certification to enroll those students. | SEVP is developing an AI-enabled solution to help policy analysts and other SEVP staff quickly find and summarize information about regulations and guidance for foreign students and SEVP-certified schools. This enhanced capability reduces the time required for manual research, enabling SEVP staff to focus on more complex policy and guidance issues. It also ensures consistent and accurate responses across SEVP functions, improving overall efficiency and effectiveness in supporting foreign students and schools. | Generated outputs provide an initial analysis of applicable material, which analysts refine, modify, and review as part of their process. | Generated outputs provide an initial analysis of applicable material, which analysts refine, modify, and review as part of their process. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2402 | SEVP Response Center Chatbot - SID (SEVIS Interactive Dialog) | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This use case intends to solve the problem of handling a high volume of routine inquiries from students and officials, which can overwhelm human agents and delay response times. | AI gives SID the ability to understand voices, and its deterministic question-and-answer workflow (1) enables SID to answer routine caller questions without a help desk agent and (2), when a help desk agent is required, creates a ticket with a caller transcript to reduce the burden on the agent. This frees human agents to handle more complex cases and issues with specific records. | SID answers frequently asked questions from callers. If SID cannot answer a caller’s question, it turns the caller over to an agent in the response center. The chatbot captures the interaction with the caller and sends the information via an API to the Student and Exchange Visitor Program Automated Management System (SEVPAMS). | SID answers frequently asked questions from callers. If SID cannot answer a caller’s question, it turns the caller over to an agent in the response center. The chatbot captures the interaction with the caller and sends the information via an API to the Student and Exchange Visitor Program Automated Management System (SEVPAMS). | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2423 | Digital Records Manager (DRM) User Assistance Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI tool, the Digital Records Manager (DRM) User Assistance Chatbot, is designed to address the problem of efficiently searching and gathering information within the Investigative Case Management (ICM) system used by Homeland Security Investigations (HSI). Specifically, it provides immediate, on-demand assistance to investigators by allowing them to pose natural language questions about using the DRM application for case and media management. | This is an AI records search tool to help investigators more efficiently search and gather information. The Digital Records Manager (DRM) User Assistance Chatbot is intended to increase user efficiency by providing answers to commonly asked questions without the need to manually refer to documentation or submit a help desk ticket. A reduction in the volume of submitted help desk tickets is expected as a result. | The outputs of the DRM User Assistance Chatbot will be natural language responses to user questions, based on a custom Knowledge Base of DRM documentation artifacts, supported by the natural language capabilities of the backing LLM. | The outputs of the DRM User Assistance Chatbot will be natural language responses to user questions, based on a custom Knowledge Base of DRM documentation artifacts, supported by the natural language capabilities of the backing LLM. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2436 | Burlington Finance Center Voice Bot | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This use case intends to solve the problem of the manual and time-consuming process of handling routine bond inquiries. | Identify and verify the caller, retrieve the status of the bond, and share the bond status with the requester and/or answer administrative FAQs. | It will use Natural Language Understanding (NLU)/Natural Language Processing (NLP) to perform voice-to-text and text-to-voice translations, giving it the ability to recognize voices and meaning. The BFC Voice Bot will be deterministic and will not use Generative AI. Language Translation Technology (LTT) will be used to translate inquiries from Spanish to English and responses from English to Spanish, if needed. | It will use Natural Language Understanding (NLU)/Natural Language Processing (NLP) to perform voice-to-text and text-to-voice translations, giving it the ability to recognize voices and meaning. The BFC Voice Bot will be deterministic and will not use Generative AI. Language Translation Technology (LTT) will be used to translate inquiries from Spanish to English and responses from English to Spanish, if needed. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2467 | Purchase Card Worksheet Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of reviewers having to manually check each purchase card line item against numerous laws, policies, and contract terms, which is time‑consuming and increases the risk of missed compliance issues. | The AI module is expected to help accelerate compliance checks, streamline processes, and assist in identifying potential risks, while continuously improving through a human-in-the-loop feedback system. This integration is intended to help procurement actions better align with relevant legal and policy requirements. | The outputs of the AI module will include a compliance status for each purchase line item (compliant or non-compliant) and detailed reasoning with citations to the specific policies or legal documents used in the evaluation. The Automated Purchase Card Approval System will use these AI-generated compliance statuses and explanations to help route items and flag potential issues within the approval workflow. AI outputs do not themselves approve or deny purchases. Compliant items may proceed through the workflow consistent with existing business rules, while non-compliant items will be flagged for human review. Human reviewers can access detailed reasoning for non-compliant determinations and may choose to correct requests, raise exceptions, or flag inaccuracies. | The outputs of the AI module will include a compliance status for each purchase line item (compliant or non-compliant) and detailed reasoning with citations to the specific policies or legal documents used in the evaluation. 
The Automated Purchase Card Approval System will use these AI-generated compliance statuses and explanations to help route items and flag potential issues within the approval workflow. AI outputs do not themselves approve or deny purchases. Compliant items may proceed through the workflow consistent with existing business rules, while non-compliant items will be flagged for human review. Human reviewers can access detailed reasoning for non-compliant determinations and may choose to correct requests, raise exceptions, or flag inaccuracies. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2513 | Natural Language Search for Legal Case Management | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of Office of the Principal Legal Advisor (OPLA) personnel having to craft complex searches and manually review large volumes of case documents in its OPLA Case Management System to find relevant information. | The AI-enabled search capabilities will enable Office of the Principal Legal Advisor (OPLA) users to more efficiently search and extract relevant information from the OPLA Case Management System. This enhancement improves efficiency, saves time, and enables OPLA personnel to focus on higher-value tasks, ultimately supporting more effective case management. | The outputs of the AI system include generated queries and the corresponding search results. Office of the Principal Legal Advisor personnel use these outputs to review documents and records relevant to their work. The AI does not make legal judgments or case decisions; it helps users find and organize relevant documents, and attorneys remain responsible for interpreting the information and applying the law. | The outputs of the AI system include generated queries and the corresponding search results. Office of the Principal Legal Advisor personnel use these outputs to review documents and records relevant to their work. The AI does not make legal judgments or case decisions; it helps users find and organize relevant documents, and attorneys remain responsible for interpreting the information and applying the law. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2559 | Duplicate Contract Detection in Contract Tracking Application | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This use case intends to solve the problem of inefficiencies and redundancies in contract management. | The use of AI in contract management reduces the time it takes to manually inspect contracts and improves COT's ability to identify duplicative contracts. This implementation will reduce contract costs and aligns with federal procurement consolidation and cost-efficiency initiatives. | The primary outputs of the AI system are alerts and reports. When the AI detects a potential duplicate contract, it generates an alert for the user. Additionally, the AI can produce detailed reports that highlight the identified duplicates and provide relevant contract details. | The primary outputs of the AI system are alerts and reports. When the AI detects a potential duplicate contract, it generates an alert for the user. Additionally, the AI can produce detailed reports that highlight the identified duplicates and provide relevant contract details. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2581 | Semantic Search for Digital Forensics Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem that traditional keyword searches and manual review make it difficult to find relevant evidence across large, mixed-format digital datasets, especially when important material does not contain the exact search terms. | This tool will help Homeland Security Investigations quickly find relevant evidence within large, complex digital datasets, reducing time spent on manual review. This supports faster, more effective investigations and better use of limited investigative resources, ultimately enhancing public safety. | Depending on the type of data ingested, the AI will output a list of items, including chat messages, emails, pictures, and videos relevant to the search query. | Depending on the type of data ingested, the AI will output a list of items, including chat messages, emails, pictures, and videos relevant to the search query. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2583 | Draft Report Generation and Formatting for Investigations | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of contract managers having to manually review large numbers of contract records to spot possible duplicates, which makes it easy to miss overlapping agreements and contributes to redundant spending. | The use of AI in contract management is expected to reduce the time required for manual contract review and improve the ability to identify duplicative contracts. This implementation supports efforts to reduce contract costs and aligns with federal procurement consolidation and cost-efficiency initiatives. | The solution’s AI outputs are alerts and reports. When the AI detects a potential duplicate contract, it generates an alert for the user. The system can also produce detailed reports that highlight identified duplicates and provide relevant contract details. These outputs are intended to support contract managers and other personnel in reviewing and resolving potential duplicate contracts. Users may access detailed reports to understand the nature of the duplicates and take appropriate actions, such as consolidating contracts, renegotiating terms, or canceling redundant agreements. The resolution workflow guides users through the process of addressing duplicate contract alerts. The AI component will not itself modify, consolidate, or cancel contracts; all actions are taken by personnel following existing approval and procurement processes. | The solution’s AI outputs are alerts and reports. When the AI detects a potential duplicate contract, it generates an alert for the user. The system can also produce detailed reports that highlight identified duplicates and provide relevant contract details. 
These outputs are intended to support contract managers and other personnel in reviewing and resolving potential duplicate contracts. Users may access detailed reports to understand the nature of the duplicates and take appropriate actions, such as consolidating contracts, renegotiating terms, or canceling redundant agreements. The resolution workflow guides users through the process of addressing duplicate contract alerts. The AI component will not itself modify, consolidate, or cancel contracts; all actions are taken by personnel following existing approval and procurement processes. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2584 | Named Entity Resolution for Investigative Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of investigators having to manually identify and normalize entities such as names, locations, and other selectors in large volumes of investigative text data, which is slow and makes it difficult to run accurate, consistent searches. | The benefits of using this AI include increased search accuracy, enhanced data analysis capabilities, and the ability to handle domain-specific entities more effectively. | The outputs of this AI solution are the identified and extracted entities, which investigators use to refine search results and improve investigative workflows. Homeland Security Investigations may use the solution to generate leads, with further action required by investigators and analysts as part of the full investigative process before any action is taken against an individual. | The outputs of this AI solution are the identified and extracted entities, which investigators use to refine search results and improve investigative workflows. Homeland Security Investigations may use the solution to generate leads, with further action required by investigators and analysts as part of the full investigative process before any action is taken against an individual. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2594 | ICE Enterprise AI Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case aims to (1) address cybersecurity concerns related to ICE personnel using externally hosted commercial chatbots and (2) solve the issue of out-of-the-box LLMs lacking tailored capabilities for ICE, such as the integration of internal ICE artifacts. | By providing an ICE-owned and agency-wide solution, the ICE Enterprise AI Assistant addresses cybersecurity and privacy concerns associated with external AI tools, while enhancing employee efficiency and supporting the ICE mission. The tool advances ICE’s AI adoption and is expected to improve information access and increase productivity across the organization. | The solution’s AI outputs vary depending on the user’s request, but generally help personnel quickly access relevant data, reducing the time spent searching through internal resources. The chatbot also improves data reliability by citing its sources and tailoring responses to the specific needs of ICE personnel. Outputs are intended as a support tool and users may not rely on outputs as the principal basis for decisions or actions classified as high-impact AI under OMB guidelines. | The solution’s AI outputs vary depending on the user’s request, but generally help personnel quickly access relevant data, reducing the time spent searching through internal resources. The chatbot also improves data reliability by citing its sources and tailoring responses to the specific needs of ICE personnel. Outputs are intended as a support tool and users may not rely on outputs as the principal basis for decisions or actions classified as high-impact AI under OMB guidelines. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2603 | ICE Terminology and Data Asset Discovery Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of the time-consuming process of manually searching internal ICE resources for dataset information to create logical data definitions. | The chatbot is expected to help the Data Management Team quickly locate relevant SharePoint information and assist in reviewing and refining data element information. By reducing the time spent searching for information and creating logical data definitions, the chatbot enables personnel to focus on higher-value tasks. | The AI produces text responses to user queries and generates candidate logical table and column names, which Data Management Team personnel review and validate against source information and may incorporate into ICE’s internal data catalog. | The AI produces text responses to user queries and generates candidate logical table and column names, which Data Management Team personnel review and validate against source information and may incorporate into ICE’s internal data catalog. | |||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2755 | AI-Assisted eDiscovery Search | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The use case addresses inefficiencies in traditional document search and retrieval processes, such as time-consuming manual searches, the complexity of crafting SQL queries, low relevance and accuracy in search results, difficulties in digesting lengthy content, and the absence of summarization tools. These challenges hinder users' ability to efficiently access and understand critical information. | The intended purpose of this AI use case is to optimize document search, retrieval, and summarization processes by enabling users to interact with data conversationally and efficiently. The benefits include improved accuracy and relevance in search results, reduced time spent analyzing large datasets, simplified access to complex information, and enhanced decision-making through concise summaries and precise outputs. | The solution’s AI outputs include precise search results and summarized content, enabling users to quickly access information, make informed decisions, and take follow-on actions without requiring technical expertise. The tool does not make legal or case-outcome decisions; it retrieves and summarizes documents, and Office of the Principal Legal Advisor personnel remain responsible for interpreting the results and applying the law. | The solution’s AI outputs include precise search results and summarized content, enabling users to quickly access information, make informed decisions, and take follow-on actions without requiring technical expertise. The tool does not make legal or case-outcome decisions; it retrieves and summarizes documents, and Office of the Principal Legal Advisor personnel remain responsible for interpreting the results and applying the law. | |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2342 | JES and Appropriations Insight | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Convert plain text statements in Congressional reports to machine-readable tasks that can be managed in Outlook, Jira, or other project tracking software. | Purpose: Converts scanned financial tables from PDFs into structured, machine-readable data while maintaining multi-year spending relationships. Benefits: Eliminates manual data entry, reduces errors, and significantly speeds up the process of consolidating historical financial data from legacy documents. | Structured tables in place of free text to provide a machine-readable dataset. | Structured tables in place of free text to provide a machine-readable dataset. | |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2343 | CFO Navigator | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Quickly and accurately accessing and understanding DHS Chief Financial Officer (CFO) data. | Provides authorized staff with an intuitive, conversational interface to query and analyze DHS CFO financial data and reports. Benefits: Democratizes access to financial information, reduces time spent searching through documents, and enables quick self-service analytics without specialized database knowledge. | On-demand information retrieval via natural language processing. | On-demand information retrieval via natural language processing. | |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2406 | DHS Asset Assessment Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Reduces users’ required time, effort, preexisting knowledge of which dataset has which information, and expertise with writing functional SQL. | Among DHS’s largest expenses are its people, its facilities, and its assets. The value of the Asset Assessment Tool is to improve the efficiency of DHS Property Managers’ analysis and planning process by expediting and automating today’s very manual assessment of facility placement and consolidation. The tool also provides enhanced views of information that were not previously available, such as predictions of facility cost over time for multiple facilities using actual utilization data inputs. The tool saves developer time, and ultimately resource costs, by removing manual SQL development steps from the analysis process. It ultimately reduces users’ required time, effort, preexisting knowledge of which dataset has which information, and expertise with writing functional SQL. | Natural language processing outputs with descriptions of intermediate AI logic to accomplish cost-benefit asset assessments. When outputs include location-specific information (e.g., which county an office is in or which facilities are in a given state), outputs include mapping of geospatial information to enhance spatial analysis and improve the accuracy of decision-making. | Natural language processing outputs with descriptions of intermediate AI logic to accomplish cost-benefit asset assessments. When outputs include location-specific information (e.g., which county an office is in or which facilities are in a given state), outputs include mapping of geospatial information to enhance spatial analysis and improve the accuracy of decision-making. 
| |||||||||||||||||||||
| Department Of Homeland Security | MGMT | DHS-418 | Spending Analysis and Budget Execution Risk (SABER) Model | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identify accounts at risk of over- and under-spending. | Purpose: Predicts potential budget execution issues by analyzing historical spending patterns across various Treasury accounts and classifications. Benefits: Early identification of spending anomalies allows proactive budget management and reduces the risk of under/overspending. | Warnings, flags for review and comparison via prediction and classification. | Warnings, flags for review and comparison via prediction and classification. | |||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2418 | MiX Phenotyping | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Syndromic surveillance | Electronic medical records can provide key data for health security insights. AI/ML makes it possible to translate these records into machine-readable data, which is the first step to finding these health security insights. AI/ML will then be used to find clinical patterns across the data, such as patterns in symptoms over time and location. AI/ML will also be used to automate the process for detecting new trends (anomalies) in these clinical patterns. These health security insights can alert about potential threats, inform messaging, and provide decision support to medical and public health partners. | The main AI system outputs will include anomaly detection for emerging trends in clinical record patterns. AI/ML outputs will also be used to find these clinical patterns by clustering, classification and topic modeling. These clinical patterns will finally be output as linear reference models that are simple enough to be interpreted and guided by human clinicians using human-in-the-loop collaboration. | The main AI system outputs will include anomaly detection for emerging trends in clinical record patterns. AI/ML outputs will also be used to find these clinical patterns by clustering, classification and topic modeling. These clinical patterns will finally be output as linear reference models that are simple enough to be interpreted and guided by human clinicians using human-in-the-loop collaboration. | |||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2419 | MiX Indicators | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Identification of emerging health threats. | Online news and other open-source media can be used to quickly detect and respond to emerging health security threats. However, it is not practical for people to read and scan all online news stories every day for key terms. AI/ML makes it possible to read, digest and organize news stories to look for key threat terms. AI/ML will also be used to predict normal trends in news stories. This makes it possible to detect when an unusual number of health security threat terms are found in the news. This use of AI/ML helps prepare for, respond to, and protect from potential health security threats. | The main AI system output will be anomaly detection which represents two elements: 1) AI/ML predictions for usual trends in news story key terms vs. 2) the actual daily number of mentions for key terms in the news. When the actual number of mentions for key terms exceeds modeled predictions, these will be detected as anomalies. | The main AI system output will be anomaly detection which represents two elements: 1) AI/ML predictions for usual trends in news story key terms vs. 2) the actual daily number of mentions for key terms in the news. When the actual number of mentions for key terms exceeds modeled predictions, these will be detected as anomalies. | |||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2421 | One Health Threat Detection and Risk Assessment Platform (OH-TREADS) / Planner | a) Pre-deployment – The use case is in a development or acquisition status. | Health & Medical | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Early warning, Decision Making, and Situational Awareness | Visuals will be created using a broad variety of data pulled from open-source, public, and non-public sources. AI/ML will make it possible to ingest data from a variety of digital formats and translate it into usable, machine-readable information. The volume of data, and the lack of clear data collection standards, requires AI to help merge streams with different data ontologies while facilitating data interoperability with mis-paired data sets. Neither activity is, or can be, practically performed by people. AI/ML will be further used in Planner to generate risk and predictive scores in addition to displaying relevant information and analyses that help analysts understand health security threats and broadly monitor the health security landscape. AI/ML is meant to provide comprehensive situational awareness for health surveillance and public health response, with target capabilities that: enable global situational awareness of current and potential health risks from a One Health perspective; assist in early warning of health threats by location, facility, and species; aid in rapid identification of health threats at population, facility, and greater geographic resolution; and support data-driven decision making to prevent, mitigate, and respond to health threats. | The main AI system outputs will be anomaly detection following the translation of information into structured, machine-readable datasets. AI/ML is then used for risk identification and disease prediction that are overlaid on a map and displayed with other holistic visuals. The visuals and analytics can then be combined with critical infrastructure locations, resources, or capabilities (federal, state, local, tribal, and territorial) to aid response and decision making. | The main AI system outputs will be anomaly detection following the translation of information into structured, machine-readable datasets. AI/ML is then used for risk identification and disease prediction that are overlaid on a map and displayed with other holistic visuals. The visuals and analytics can then be combined with critical infrastructure locations, resources, or capabilities (federal, state, local, tribal, and territorial) to aid response and decision making. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2397 | OTA Automated Passenger Screening Gate System | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Increase efficiency and security of Automated Passenger Screening | The purpose is to manage passenger flow while reducing human interaction by using AI to assess body positioning and automatically initiate the screening process when the passenger is in the optimum position. The system provides positive control of passengers transitioning from the Non-Sterile to the Sterile side of the checkpoint via the QPS201 On-Person Screening (OPS) system, enhancing the security of the checkpoint environment and improving efficiency. | The AI system initiates an AIT scan when the passenger is in the optimum position. Depending on the results of the AIT scan, the system’s interlock control unit ensures that both doors do not open simultaneously, and the settings are configurable for TSA to determine system operation. The Dormakaba V60 doors are composed of glass, reach a maximum height of approximately 3.2 feet, and can be attached to the entrance and exit of a R&S QPS201. The V60 doors open automatically following the completion of a successful scan or remain closed if the scan was unsuccessful, in which case the passenger would be routed either for additional screening or to the re-composure area to claim their accessible property and transit into the sterile area. | The AI system initiates an AIT scan when the passenger is in the optimum position. Depending on the results of the AIT scan, the system’s interlock control unit ensures that both doors do not open simultaneously, and the settings are configurable for TSA to determine system operation. The Dormakaba V60 doors are composed of glass, reach a maximum height of approximately 3.2 feet, and can be attached to the entrance and exit of a R&S QPS201. The V60 doors open automatically following the completion of a successful scan or remain closed if the scan was unsuccessful, in which case the passenger would be routed either for additional screening or to the re-composure area to claim their accessible property and transit into the sterile area. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2400 | Answer Engine | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The Transportation Security Administration (TSA) faces increasing challenges in managing and analyzing large volumes of complex data, which can hinder the effectiveness and efficiency of its security operations. Without advanced tools to streamline data processing and generate actionable insights, the TSA’s ability to respond to evolving threats and optimize operational decision-making is limited. There is a critical need for a scalable solution that can enhance data management and analysis capabilities, enabling TSA personnel to make more informed, timely, and effective security decisions. | TSA aims to enhance its capabilities in managing and analyzing complex data, ultimately contributing to more effective and efficient security operations and optimizing the TSA's operational workflows and support capabilities. | This platform is anticipated to harness the power of AI to provide intelligent, context-aware responses and insights. | This platform is anticipated to harness the power of AI to provide intelligent, context-aware responses and insights. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2428 | Contract Requirement Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The AI solution is intended to solve the problem of labor-intensive manual creation and management of procurement requirement documents at TSA. The solution provides the TSA user with contextually accurate outputs (primarily in the form of automated document generation and recommendations for procurement documentation) tailored to specific requirements. The platform also provides contextual recommendations during the document creation process to ensure completeness and compliance with procurement requirements. | Eliminates the manual burden and reduces errors in document creation, where staff previously spent excessive time writing requirements to meet specific procurement needs from scratch. Increases documentation accuracy, consistency, and standardization. Improves management and verification of requirement documents. Cost savings represent another major benefit, as the platform reduces labor costs associated with manual document creation and management. By streamlining the procurement process, the agency can complete more procurement actions with existing resources, maximizing taxpayer dollars. The automated tool also reduces the time spent on repetitive tasks, allowing for better resource allocation. | The tool primarily outputs automated document generation and recommendations for procurement documentation. The platform provides contextual recommendations during the document creation process to ensure completeness and compliance with procurement requirements. The tool provides recommendations and decision support to guide users through the proper documentation structure and content requirements. The system's outputs always require human verification as part of the workflow, ensuring that all generated content is reviewed and approved by qualified personnel before being finalized in the procurement process. | The tool primarily outputs automated document generation and recommendations for procurement documentation. The platform provides contextual recommendations during the document creation process to ensure completeness and compliance with procurement requirements. The tool provides recommendations and decision support to guide users through the proper documentation structure and content requirements. The system's outputs always require human verification as part of the workflow, ensuring that all generated content is reviewed and approved by qualified personnel before being finalized in the procurement process. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2429 | TSA Case Handling Platform | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The tool saves case workers the manual time needed to download, order, and compile reports. | The tool saves case workers the manual time spent downloading, ordering, and compiling reports. The case workers can then use their time to work on cases rather than administrative tasks. The Newton POC has the potential to streamline collection processes related to cases and create custom reports from various materials gathered during the case manager's interview process, producing a centralized tool to manage and control all steps within each case. | The LLM automation capabilities include compiling, ordering, and exporting a PDF document that contains 6 key documents from the initial steps of the formal complaint process, and 3 key documents at the final stages of the complaint. | The LLM automation capabilities include compiling, ordering, and exporting a PDF document that contains 6 key documents from the initial steps of the formal complaint process, and 3 key documents at the final stages of the complaint. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2430 | Automated Field Data Collection | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | The proliferation of screening technologies has increased the number of field data collection events necessary to characterize system performance. Currently, TSA does not have a solution to gather operational data without deploying physical teams. Addressing this gap presents an opportunity to achieve significant field efficiencies through automation and enhance wait-time communication. | The AI will analyze screening environments via CCTV footage and extract passenger processing times for various steps within the screening process. Enabling AI to extract and visualize this data will allow TSA to make data-informed decisions while testing or deploying new screening equipment, identify anomalies, establish real-world rates and standards, and reduce or eliminate TSA’s need to deploy data collection teams, resulting in real-time data collection and significantly reduced computational time of findings. | The AI system outputs multiple determinations, including screening location performance, rates and standards of the end-to-end screening system, and passenger wait times. | The AI system outputs multiple determinations, including screening location performance, rates and standards of the end-to-end screening system, and passenger wait times. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2431 | Plan of Day Staff Optimization | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Agentic-AI | TSA must deploy a dynamic solution capable of locally optimizing allocated employee staffing while load balancing available equipment and projected passenger throughput to ensure wait times do not exceed the threshold. | Plan of Day will automate TSA screening staff optimization. | Staffing operations models prescribing when screening lanes should be opened/closed, when/where screening staff is required to absorb operational peaks, determining optimal gender and certification ratios, recommending when to schedule overtime/shift adjustments, drafting lane rotation plans, and informing national TSA staffing requirements as prescribed optimization plans deviate as airline schedules shift. | Staffing operations models prescribing when screening lanes should be opened/closed, when/where screening staff is required to absorb operational peaks, determining optimal gender and certification ratios, recommending when to schedule overtime/shift adjustments, drafting lane rotation plans, and informing national TSA staffing requirements as prescribed optimization plans deviate as airline schedules shift. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2526 | Document Translation Service (DTS) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Leveraging accurate cross-language translation with the latest Azure AI cloud service | The DTS application allows users to upload a document in a source language, call a Text Translator API specific to the application, and then download a translated copy of their artifacts. | Translates documents to and from 100 languages and dialects while preserving document structure and data format. (See Section 10.9 TAZ Azure Service Utilization) | Translates documents to and from 100 languages and dialects while preserving document structure and data format. (See Section 10.9 TAZ Azure Service Utilization) | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2622 | Service Now Predictive Intelligence Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This solves the issue of submitting duplicate or very similar Projects, Demands and Ideas. | This tool assists in preventing wasted time in working through Projects, Demands, or Ideas that have already been processed. | The AI outputs suggestions of similar Projects, Demands and Ideas. No decisions are made. | The AI outputs suggestions of similar Projects, Demands and Ideas. No decisions are made. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2676 | Text Extraction from Uploaded Images | a) Pre-deployment – The use case is in a development or acquisition status. | Transportation | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Without this capability, images are taken during maintenance but are not searchable, and the information cannot be used in generating details. | By converting images into text, the information becomes searchable, helping ensure data quality and the quality of maintenance ticketing information. | The outputs are text extracted from images. | The outputs are text extracted from images. | |||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-374 | TSA Contact Center Virtual Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI use case is designed to enhance customer service accessibility, improve operational efficiency, and provide valuable data insights to support TSA's mission and decision-making processes: 1. Limited Accessibility and Availability: The chatbot addresses the challenge of providing timely and accessible responses to public inquiries, especially outside the TSA Contact Center's operational hours. 2. Increased Demand for Customer Support: By automating responses to routine inquiries, the chatbot mitigates the strain on TSA staff caused by growing demand for customer support. 3. Resource Constraints and Fiscal Responsibility: The chatbot reduces the need for additional human resources to handle routine inquiries, thereby improving operational efficiency and fiscal responsibility. 4. Lack of Data-Driven Insights: By capturing and analyzing customer interaction data, the chatbot enables TSA to gain insights into customer needs, improve public information, and prioritize innovation and transformation efforts. 5. Consistency and Accuracy of Information: The chatbot ensures that responses to public inquiries are consistent, accurate, and aligned with the TSA's knowledge library. | The three goals for TSA's Virtual Assistant Chatbot are: 1. Enhanced Accessibility and Timeliness: By providing immediate responses both within and outside the TSA Contact Center's (TCC) operational hours, the Virtual Assistant improves ease of access for the public seeking answers to common questions. 2. Data-Driven Innovation and Transformation: The chatbot captures customer inquiries, providing valuable data to inform innovation and transformation priorities across the agency. 3. Improved Fiscal Responsibility: Leveraging automation to address increasing demand mitigates the need for additional resources, thereby enhancing fiscal responsibility. | As a predictive AI capability, the TSA's Virtual Assistant chatbot functions by using NLP to correlate existing content within the TCC's knowledge library and identify the most relevant knowledge articles for user queries. It does not generate original content. However, the system records transactional data related to customer interactions, including inputs, outputs, and topic classifications. This consistent data capture, mirroring existing email and phone channels, enables TSA to gain critical insights for customer experience improvement efforts, identify areas for public information enhancement, and understand the demand for specific services. | As a predictive AI capability, the TSA's Virtual Assistant chatbot functions by using NLP to correlate existing content within the TCC's knowledge library and identify the most relevant knowledge articles for user queries. It does not generate original content. However, the system records transactional data related to customer interactions, including inputs, outputs, and topic classifications. This consistent data capture, mirroring existing email and phone channels, enables TSA to gain critical insights for customer experience improvement efforts, identify areas for public information enhancement, and understand the demand for specific services. | |||||||||||||||||||||
| Department Of Homeland Security | USCG | DHS-2740 | Risk Management Framework (RMF) Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Provides features that translate complex security and privacy controls into plain language, monitor system security, and generate documents such as FedRAMP packages. | RMF Automation will cut ATO processing time by more than half by handling repetitive tasks so compliance teams can focus on strategy and still make final decisions. It will speed up documentation and assessments, provide near real-time risk insights, and help collect and manage security evidence to demonstrate compliance. | The primary output is document generation specific to cybersecurity needs. It can also provide plain-language explanations to users and monitor for and flag system breaches. | The primary output is document generation specific to cybersecurity needs. It can also provide plain-language explanations to users and monitor for and flag system breaches. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2386 | Sentiment Analysis -FOD Field Offices Complaints and Reviews | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The AI solution helps USCIS Field Offices efficiently analyze and categorize large volumes of complaints by using machine learning to identify sentiment trends (positive, negative, or neutral) in public feedback. | This system indicates the positive or negative feelings people express in their feedback to the U.S. Citizenship and Immigration Services (USCIS). Survey results are categorized to specify the sentiment as positive, negative, or neutral in tone in an Excel dashboard. | A graph that categorizes the data into different sentiments, using a Databricks dashboard, to see how customer service can be improved. It does not give any recommendation or decision. | A graph that categorizes the data into different sentiments, using a Databricks dashboard, to see how customer service can be improved. It does not give any recommendation or decision. | |||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2599 | Private Artificial Intelligence (AI) Tech Hub (PAiTH) | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | USCIS staff currently lack access to AI tools that can accelerate routine knowledge work while maintaining the security, compliance, and privacy requirements specific to USCIS operations. Existing commercial AI solutions cannot access USCIS-specific documents and data, cannot be customized to immigration-specific workflows, and pose data security risks. Staff spend significant time on tasks that AI could assist with—such as legal research, document drafting, language translation, code generation, and regulatory compliance checks—but have no approved internal AI capability. Additionally, different organizational roles have vastly different AI assistance needs (e.g., attorneys need legal citations; developers need code; contracting officers need FAR guidance; budget officers need access to sensitive internal fiscal data), requiring a solution that adapts to the user's function rather than providing generic responses. PAiTH will also promote USCIS innovation by enabling testing of a variety of large language models and AI platforms, while informing approaches on prompt generation, cost containment, and workforce literacy. | The purpose of PAiTH is to provide USCIS employees with an internal, secure AI assistant that delivers role-specific support for knowledge work tasks. The system will offer six persona-based assistants aligned to core USCIS job functions, each trained and prompted to provide relevant, accurate responses for that role's responsibilities. Intended benefits of PAiTH include: Increased Efficiency: Staff can quickly obtain research, draft language, translations, and technical guidance without manual searching through documents or external research; Role-Specific Accuracy: Persona-based responses tailored to each user's organizational function (legal, contracting, development, etc.) provide more relevant and useful outputs than generic AI; Data Security and Compliance: Internal deployment protects PII and sensitive USCIS data while maintaining compliance with federal security and privacy requirements; Controlled Access to USCIS Knowledge: AI can leverage USCIS-specific documents, policies, and data sources that are inaccessible to commercial AI tools, keeping sensitive information within USCIS boundaries; Standardization: Consistent AI-assisted workflows across the agency reduce variability in research and drafting quality; and Cost Savings: Reduces time spent on routine knowledge tasks, allowing staff to focus on complex decision-making and judgment-based work. | PAiTH will generate text-based outputs customized to the user's organizational persona: Legal Persona: Legal research summaries, statute and regulation citations (INA, CFR), case law analysis, draft legal memoranda outlines, document summaries with legal issue identification; Contracts Persona: Market research summaries, FAR/HSAR regulatory guidance, acquisition planning support, vendor comparison analyses, contract language suggestions; Language Translation Persona: Text-to-text translations between English and other languages for immigration documents and communications; Developer Persona: Code generation in various programming languages, code documentation, unit test creation, debugging suggestions, technical documentation drafts; Security Persona: Security compliance checklists, control mapping guidance, risk assessment frameworks, security documentation templates; CFO Persona: Financial research summaries, regulatory compliance guidance, budget justification drafts, data call response templates, training material summaries. All outputs will be text-based responses generated by the AI model, presented in a chat interface, and restricted to authorized USCIS personnel. Outputs will include appropriate disclaimers (e.g., "This is AI-generated research support, not legal advice" for the legal persona), and accompanying policy will require human review before outputs are used in any official decision-making, formal communications, and/or reporting. | PAiTH will generate text-based outputs customized to the user's organizational persona: Legal Persona: Legal research summaries, statute and regulation citations (INA, CFR), case law analysis, draft legal memoranda outlines, document summaries with legal issue identification; Contracts Persona: Market research summaries, FAR/HSAR regulatory guidance, acquisition planning support, vendor comparison analyses, contract language suggestions; Language Translation Persona: Text-to-text translations between English and other languages for immigration documents and communications; Developer Persona: Code generation in various programming languages, code documentation, unit test creation, debugging suggestions, technical documentation drafts; Security Persona: Security compliance checklists, control mapping guidance, risk assessment frameworks, security documentation templates; CFO Persona: Financial research summaries, regulatory compliance guidance, budget justification drafts, data call response templates, training material summaries. All outputs will be text-based responses generated by the AI model, presented in a chat interface, and restricted to authorized USCIS personnel. Outputs will include appropriate disclaimers (e.g., "This is AI-generated research support, not legal advice" for the legal persona), and accompanying policy will require human review before outputs are used in any official decision-making, formal communications, and/or reporting. | |||||||||||||||||||||
| Department Of Homeland Security | USSS | DHS-2641 | Enterprise WiFi | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Reduce the manual investigation/intervention required to configure, monitor, and troubleshoot issues. | Reduced time to resolve reported network issues and outages. | Automates wireless operations to improve reliability, predictability, and visibility into user experiences. Additionally, it lists the core technical features, such as spatial streams, channel bandwidth, modulation techniques, and advanced operational capabilities, ensuring clarity and relevance for technical and professional audiences. | Automates wireless operations to improve reliability, predictability, and visibility into user experiences. Additionally, it lists the core technical features, such as spatial streams, channel bandwidth, modulation techniques, and advanced operational capabilities, ensuring clarity and relevance for technical and professional audiences. | |||||||||||||||||||||
| Department Of Homeland Security | USCG | DHS-2745 | FLIR 280 HD | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Computer Vision | Rapid detection of distressed persons in the water. | Enhanced ability to detect persons in the water to more rapidly deploy life-saving resources. | The operator views an image of the maritime domain. The AI draws boxes around suspected persons in the water to direct the operator to further investigate those anomalous detections. | 15/09/2024 | c) Developed with both contracting and in-house resources | FLIR | No | The operator views an image of the maritime domain. The AI draws boxes around suspected persons in the water to direct the operator to further investigate those anomalous detections. | A mixture of proprietary data from the vendor as well as data gathered during test events. | No | Yes | b) In-progress | There are no impacts to privacy, civil rights, or civil liberties of the public; the AI can only recognize whether or not an object is a person but cannot identify any features of that person. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | a) Yes | b) Not applicable | Other | ||||||
| Department Of Homeland Security | CBP | DHS-2572 | Acoustic Signature AI for Gunshot Detection | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | This system detects sounds associated with gunfire. Only events classified as High Confidence are sent to users as a detection. Users then review the audio of the event to determine the accuracy of the detection and to review the location of the detection on an associated map. If the audio review is deemed relevant by agents and/or system users, agents may be sent to investigate further, and a notification of potential gunshot activity is sent to field personnel. It may warn of activity south of the border, where notifications can enhance the safety of officers/agents in the region. Any further actions taken will be based on what the agents encounter when they arrive on scene. The AI associated with this system is not a principal basis for a decision or action. | Classical/Predictive Machine Learning | The AI will be used to make confidence determinations between non-gunshot acoustic activity and actual gunshot activity to reduce false alerts to agents monitoring the User Interface. | Gunshot and UAS detection notifications can be used by Agents for enhanced situational awareness in their area of operations. By utilizing this AI learning technology, they will have high-confidence alerts of a detection, with the specific type, as well as a pinpointed GPS location within 3 meters. | Agents will get a real-time notification via text with a link to the location on maps as well as a link to an audio clip recording of the event. With Agent confirmation that the event was correctly identified, the AI utilizes all of the information for future cases. 
This real-time notification grants Agents the ability to know what is happening right now and anticipate what might happen next in their environment, increasing agent and officer safety in the geographic region. | 08/09/2025 | a) Purchased from a vendor | Invariant Corporation | No | Agents will get a real-time notification via text with a link to the location on maps as well as a link to an audio clip recording of the event. With Agent confirmation that the event was correctly identified, the AI utilizes all of the information for future cases. This real-time notification grants Agents the ability to know what is happening right now and anticipate what might happen next in their environment, increasing agent and officer safety in the geographic region. | Initial training was conducted by the vendor using vendor-obtained audio. Additional data was captured during deployments with other customers for further refinement. | No | No | |||||||||||||
| Department Of Homeland Security | CBP | DHS-2729 | Facial Recognition for National Security and Transnational Criminal Organizations | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI utilizes publicly available images and performs a comparative search against other open-source images to identify potential matches. Results produced by the AI can only be utilized for investigative leads and cannot be utilized as a determining factor for any enforcement action by CBP personnel. Therefore, in instances where the AI outputs assist in identifying leads, CBP personnel must then use internal CBP or USG data prior to making any law enforcement decisions or actions. The outputs of the AI do not serve as the principal basis for a high-impact decision or action. | Computer Vision | USBP works to address challenges in identifying individuals who may be associated with national security concerns or transnational criminal organizations, especially in situations where traditional identification methods are unavailable or insufficient. To support this effort, USBP utilizes facial recognition technology to generate investigative leads by providing visually similar photos of unidentified individuals. These leads serve as a starting point for analysts to conduct further investigation. No enforcement action is taken based solely on the leads generated by this tool. All potential identifications undergo thorough investigation and validation to ensure accuracy and compliance with established standards. This approach reflects USBP's commitment to responsibly leveraging technology to enhance national security and combat transnational criminal activities. 
| The intended purpose of this facial recognition technology is to assist USBP agents in addressing the challenges of identifying individuals who may be linked to national security threats or transnational criminal organizations. This capability enhances USBP's ability to develop investigative leads in cases where traditional identification methods are unavailable or insufficient. The benefits include improved efficiency in generating leads, enhanced support for national security and criminal investigations, and the ability to address complex threats more effectively. | The tool generates visually similar photos of individuals, which serve as preliminary leads for analysts to initiate further investigation. These outputs are not definitive identifications but are intended to assist in narrowing investigative focus. No enforcement action is permitted based solely on these leads. Every potential identification must undergo comprehensive investigation and validation by analysts to ensure accuracy and adherence to established investigative protocols. This process ensures the responsible and ethical use of the technology. | 09/10/2025 | a) Purchased from a vendor | Clearview AI | Yes | The tool generates visually similar photos of individuals, which serve as preliminary leads for analysts to initiate further investigation. These outputs are not definitive identifications but are intended to assist in narrowing investigative focus. No enforcement action is permitted based solely on these leads. Every potential identification must undergo comprehensive investigation and validation by analysts to ensure accuracy and adherence to established investigative protocols. This process ensures the responsible and ethical use of the technology. | The outputs of the AI are a "percentage match" which if above a threshold returns additional metadata from the registered person in the gallery. 
The AI was "tuned" by adjusting the threshold over months of testing to ensure matches did not misidentify persons while still ensuring that various templates and pictures returned a "real world" positive match. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | |||||||||||
| Department Of Homeland Security | CBP | DHS-163 | Non-Intrusive Inspection (NII) 3D Imaging Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Computer Vision | Use millimeter wave data to produce human-interpretable 3D images and cue end users to possible anomalies, helping CBP more effectively and efficiently detect contraband in imported mail at a speed that does not disrupt flow of commerce. | Utilizes AI/ML to generate high resolution, rapid imaging of objects behind occlusions; create 3D images for existing processes without significant slowdowns; and provide a novel narcotics detection capability for the inspection of packages. | Detection alerts for Items of Interest. | 01/04/2025 | c) Developed with both contracting and in-house resources | ThruWave | No | Detection alerts for Items of Interest. | Images and data of baggage inspection. | No | No | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2439 | Hazard Mitigation Assistance Chatbot | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Understanding of Hazard Mitigation Assistance (HMA) programs and data analysis of HMA data. The tool will lead to a consistent and accurate understanding of the programs and contribute to a standardization of project reviews. | Leverage advanced Artificial Intelligence (AI) capabilities to address challenges faced by Hazard Mitigation Assistance (HMA) staff, enhancing productivity for day-to-day functions. | The tool utilizes the OpenAI models through Azure OpenAI to respond to prompts asked through the chatbot. | 18/08/2025 | c) Developed with both contracting and in-house resources | Ideation | Yes | The tool utilizes the OpenAI models through Azure OpenAI to respond to prompts asked through the chatbot. | Training data included all HMA policy, training, and data available on FEMA.gov. This includes OpenFEMA datasets, Legal/Policy Documents (i.e., federal legal and policy references, such as nondiscrimination clauses and presidential memorandums); Regulations (i.e., regulatory texts, including sections of the Code of Federal Regulations (CFR)); and Guides and Handbooks (i.e., FEMA-issued guides that provide frameworks and instructions, such as planning handbooks and operational guides). | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2711 | Technical Resource for Mitigation Programs | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Emergency Management | Pilot | c) Not high-impact | Not high-impact | Generative AI | The FEMA Hazard Mitigation Assistance (HMA) AI solution addresses the challenge of managing complex grant processes that currently rely on manual review of thousands of applications, modifications, and closeout packages. Analysts must manually extract and reconcile data scattered across multiple nonstandardized systems (NEMIS, PARS, PDFs, spreadsheets), risking delays in obligation and closeout of grants. This fragmentation leads to inconsistent compliance determinations, increased audit risk, and inefficient use of limited staff resources. The initial document scan alone takes 45-60 human minutes per modification, with full reviews requiring 1-2 human days, significantly delaying the release of mitigation funds to communities in need. | The AI solution will deliver significant benefits to both FEMA operations and disaster-affected communities. Operationally, it will reduce document review time by 40-70%, saving 15-20 analyst hours per week to prioritize higher-value activities requiring human judgment and stakeholder interaction. The system will enhance compliance through consistent regulatory interpretation, reducing errors and improving financial calculation accuracy. The AI will identify eligibility concerns in real time, reducing the risk of funding grants that do not align with federal laws, regulations, and executive orders. For the public, the AI will accelerate application review, obligation and closeout of mitigation grants, enabling states, tribes, territories, and local communities to implement risk-reduction projects sooner. This faster release of funds directly enhances public safety and disaster resilience while providing a more consistent application experience across regions. 
| The AI system produces both machine-readable and human-readable artifacts to support grant management throughout the lifecycle. These include structured findings reports that categorize issues by scope, schedule, and budget with source citations; anomaly/discrepancy KPIs highlighting timeline gaps, invoice pattern shifts, and budget-to-scope mismatches; and compliance checklists identifying missing or non-conforming items. For documentation support, the system generates auto-drafted Requests for Information (RFIs), lock-in letters, and closeout letters with precise regulatory citations in a professional tone. It also creates CSV exports listing flagged terms and financial variances with page references. Additionally, the system provides on-demand answers to regulatory questions and prioritized worklists showing grants needing immediate action, supporting knowledge democratization and workflow optimization. | 11/07/2025 | b) Developed in-house | Yes | The AI system produces both machine-readable and human-readable artifacts to support grant management throughout the lifecycle. These include structured findings reports that categorize issues by scope, schedule, and budget with source citations; anomaly/discrepancy KPIs highlighting timeline gaps, invoice pattern shifts, and budget-to-scope mismatches; and compliance checklists identifying missing or non-conforming items. For documentation support, the system generates auto-drafted Requests for Information (RFIs), lock-in letters, and closeout letters with precise regulatory citations in a professional tone. It also creates CSV exports listing flagged terms and financial variances with page references. Additionally, the system provides on-demand answers to regulatory questions and prioritized worklists showing grants needing immediate action, supporting knowledge democratization and workflow optimization. 
| The FEMA model is trained, fine-tuned, and evaluated using comprehensive datasets of historical grant management records, including subaward closeout documentation, financial reconciliation data, and management cost lock-in records from previous disaster declarations across HMGP, PDM, FMA, and BRIC programs. | No | Yes | |||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2727 | Large Language Model (LLM) Guided Data Dictionary Generation | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Generative AI | This LLM-generated data dictionary eases the burden of metadata documentation on the data stewards when integrating their data into FEMADex by creating field definitions. | This LLM-generated data dictionary automatically creates field definitions, saving data stewards significant time and effort during data onboarding. | This LLM model utilizes a Retrieval-Augmented Generation (RAG) technique for data dictionary generation by first retrieving relevant provided metadata from 1) the source system intake form and 2) an acronym key. The LLM then uses this context to generate clear, brief descriptions for each field. | 01/09/2025 | b) Developed in-house | No | This LLM model utilizes a Retrieval-Augmented Generation (RAG) technique for data dictionary generation by first retrieving relevant provided metadata from 1) the source system intake form and 2) an acronym key. The LLM then uses this context to generate clear, brief descriptions for each field. | This model utilizes metadata documentation provided by data stewards, known as the Source System Intake Form (SSIF), during source system intake for FEMADex. Additionally, an acronym key for each source system, provided by the respective data stewards, is also used. The SSIF and acronym key are unique for each source system. | No | Yes | |||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2454 | LIGER Generative AI Toolkit | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | LIGER® for FPS will enable FPS users to employ the power of a Large Language Model (LLM) against non-public and sensitive Agency documents to save time and effort conducting time-intensive tasks, such as drafting documents, including: Position Descriptions (PD), Statement of Work (SOW) for contracting actions, professional emails and workforce announcements, Public Affairs stories and releases, and Law Enforcement operations orders; summarization of large documents; proofreading and providing feedback or suggestions on written work; conducting policy analysis, including: policy comparison and compliance verification, building textual process maps based on policy, and identifying contradictory or outdated policy; budget forecasting and spend plan analysis; assisting with code generation, review, and debugging; machine language translation; brainstorming ideas for projects or processes; and information/data retrieval from document libraries. The system will also be used to automate manual processes related to generation of templated documents. | FPS personnel spend considerable time creating documents from scratch or manually searching large volumes of documents to retrieve information, identify responsibilities, and find discrepancies or outdated passages within policies. Use of LIGER® is expected to substantially reduce time and associated costs in generating new draft documents such as Statements of Work, Law Enforcement operations orders, updated Position Descriptions, and other documents using previous similar examples, reviewing large volumes of information for outdated policy or discrepancies with new DHS policy or Executive Orders, and summarizing large single documents or document collections. 
LIGER’s ability to securely handle sensitive information and return responses based on custom document collections offers advantages: it can securely handle Controlled Unclassified Information (CUI) such as LES and FOUO data, which is not suitable for other GenAI applications. Additionally, because LIGER® cites sources, users can rapidly find where specific information from the generated narrative can be found in source documents. | LIGER® uses Natural Language Processing (NLP) to return an easily readable text narrative. Like all GenAI, the text response should be verified for accuracy. | 26/08/2025 | a) Purchased from a vendor | LMI Consulting, LLC | No | LIGER® uses Natural Language Processing (NLP) to return an easily readable text narrative. Like all GenAI, the text response should be verified for accuracy. | LIGER currently uses the ChatGPT-4o large language model provided through OpenAI's service on the DHS Enterprise Cloud - Azure. LIGER itself is not "trained" by the data provided for use with the application, and it does not provide user data to LMI or any external entity to fine-tune the components that comprise LIGER. Unlike ChatGPT, LIGER does not have "persistent memory" of inputs provided across different "chat" strings. | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-185 | Babel | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Babel utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface, versus doing multiple manual queries. The output is not singly used for action or decision making. | CBP uses this tool to conduct targeted queries to aid CBP in open source research to monitor potential threats or dangers or identify travelers who may be subject to further inspection for violation of laws CBP is authorized to enforce or administer. | Babel utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface versus doing multiple manual queries. The output is not singly used for action or decision making; it is used to identify additional Open Source or Social Media presence of a person, or additional selectors (such as phone numbers and emails) previously unknown to CBP, which are compared by an analyst against Government systems to identify additional derogatory information. These factors can often eliminate the traveler from additional screening. | 29/08/2023 | c) Developed with both contracting and in-house resources | Babel Street | Yes | Babel utilizes AI modules for text detection and translation as well as object and image recognition to provide analysts with possible matches to manually review in a single interface versus doing multiple manual queries. 
The output is not singly used for action or decision making; it is used to identify additional Open Source or Social Media presence of a person, or additional selectors (such as phone numbers and emails) previously unknown to CBP, which are compared by an analyst against Government systems to identify additional derogatory information. These factors can often eliminate the traveler from additional screening. | Babel uses proprietary data, public datasets, and machine-labeled datasets to train its NLP and matching models. Evaluation data includes human-annotated datasets; precision, recall, and F1 score assessments; and customer-provided labeled name pairs for tuning. All returned results are carefully reviewed, and no sensitive CBP-owned data is involved in the process. | Yes | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | a) Yes | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | Potential for inaccurate translation or translation missing context due to local variant dialects or slang that are not captured by the vendor, which can be mitigated through coordination with CBP employees who are from that country/area, foreign language certified, and can be consulted for clarification. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | a) Yes, an appropriate appeal process has been established | Other | ||||
| Department Of Homeland Security | CBP | DHS-2380 | Passive Body Scanner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Identify anomalies in body heat, assisting CBP officers to detect concealed weapons and contraband, allowing for efficient processing of travelers. | PBS is intended to enhance situational awareness in pedestrian traveler processing to aid CBP officers in observing potentially dangerous objects or contraband in a timely manner and pursuant to CBP’s border search authority. | This algorithm highlights areas on a person where potential objects may be blocking the subject's expected body heat and displays these areas on a live video image, monitored by a CBP officer. The highlighted areas may show the locations of carried objects, which could be potential weapons or contraband. | 29/09/2023 | c) Developed with both contracting and in-house resources | ThruVision TAC 16 | Yes | This algorithm highlights areas on a person where potential objects may be blocking the subject's expected body heat and displays these areas on a live video image, monitored by a CBP officer. The highlighted areas may show the locations of carried objects, which could be potential weapons or contraband. | All data used in training, validation, test and evaluation of the AI is Thruvision proprietary – no data from any external sources (including the Agency) is used. Approximately 25,000 images have been used for training the DynamicDETECT model. These training images were extracted from staged screening events recorded with Thruvision cameras using various actors, concealment items, item locations and clothing. 
| No | https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp-017a-niisystemsprogrampedestriandetectionatrange-october2021.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-cbp-017a-niisystemsprogrampedestriandetectionatrange-october2021.pdf | Negligible impacts to traveler safety: PBS uses passive means (thermal imaging, no radiation emitted) to “look” for contraband or weapons on a traveler. If the PBS operator sees a weapon (either with or without the PBS), they will seek supervisor approval to conduct a pat down and initiate a secondary referral. If they see contraband, they may direct the traveler to stop and notify a supervisor, who will decide if the image or other factors meet the threshold to conduct a pat down. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | b) Not applicable | Other | ||||
| Department Of Homeland Security | CBP | DHS-2388 | CBP Translate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Assist officers and agents with immediate interpretation needs when human translators are not available. | CBP Translate enhances efficiency by expediting questioning when immediate interpretation is needed. It ensures clear communication, minimizes misunderstandings, and offers immediate accessibility via mobile and web platforms. This improves operational flexibility and creates a smoother experience for travelers. | The outputs of CBP Translate include translated text or audio in the form of chat bubbles, which store each interaction. Additionally, CBPOs can capture images of non-travel documents for text translation, but images of actual travel documents are not taken. | 07/08/2019 | c) Developed with both contracting and in-house resources | Aneesh Technologies, 24X7, Ellumen Inc., Deloitte, NiyamIT | Yes | The outputs of CBP Translate include translated text or audio in the form of chat bubbles, which store each interaction. Additionally, CBPOs can capture images of non-travel documents for text translation, but images of actual travel documents are not taken. | The models are trained using examples of translated sentences and documents, which are typically collected from the public web. A data miner that focuses more on precision than recall is used, which allows the collection of higher quality training data from the public web. 
| Yes | https://www.dhs.gov/publication/dhscbppia-069-cbp-translate-application | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-069-cbp-translate-application | The key risks would be the program's inability to accurately translate what was spoken by both sides of the conversation, leading to significant delays in emergency response situations when trying to leverage traditional phone-based translation services in areas with limited cell phone reception. Inaccuracy may also lead to longer processing times at Ports of Entry. These were identified via feedback from the end-users and a common understanding regarding LLM language translation models. | d) In-progress | b) Development of monitoring protocols is in-progress | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2389 | Passenger Security Assessment Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This model helps CBP efficiently identify security risks, especially related to narcotics interdiction, by providing real-time risk assessments that overcome limitations of traditional, time-intensive methods. The model solves the problem by providing CBP personnel with real-time risk assessments and actionable recommendations integrated into existing systems. By analyzing data not typically accessible during initial processing, the model enhances the ability to detect smuggling indicators and prioritize high-risk individuals or vehicles for further inspection. This improves the efficiency and effectiveness of border security operations, enabling CBP to better safeguard the nation while maintaining the flow of legitimate travel and trade. | This model is designed to support CBP personnel in quickly recognizing crossings that may warrant additional scrutiny, thereby enhancing border security and safety. | The outputs include risk assessments and recommendations, which are integrated into existing passenger processing and threat targeting systems, such as the Automated Targeting System (ATS). These notifications equip CBP personnel with actionable insights to address potential security concerns in real time. | 01/04/2013 | b) Developed in-house | Yes | The outputs include risk assessments and recommendations, which are integrated into existing passenger processing and threat targeting systems, such as the Automated Targeting System (ATS). These notifications equip CBP personnel with actionable insights to address potential security concerns in real time. | This model leverages data housed within the Automated Targeting System (ATS) Unified Passenger (UPAX). 
| Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Sex/Gender, Age | Yes | a) Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Risks include false positives and negatives, which could result in delays for travelers, failure to detect narcotics smuggling, or missed detections; algorithmic bias may disproportionately target certain types of travelers and crossing behaviors (related to model training using historical seizures); and ongoing challenge of traffickers adapting their methods to evade detection. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | ||||
| Department Of Homeland Security | CBP | DHS-2390 | Cargo Security Assessment Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This use case addresses the challenge of efficiently identifying and mitigating risks associated with cargo shipments entering the United States. With the high volume of shipments processed daily at ports of entry, it is essential to detect potentially high-risk shipments, such as those that may pose security threats, without causing delays to legitimate trade and commerce. This use case uses advanced data analytics and machine learning to enhance the ability to evaluate and prioritize shipments for further review, ensuring that flagged cargo is inspected appropriately while maintaining efficient cargo processing operations. | AI/ML Models identify high risk shipments to aid CBP officers in detecting narcotics smuggling threats, identifying candidate shipments for review and referral for inspection at CBP Ports of Entry (POEs). | High risk model results are returned to users as a system rule hit. These rule hits are viewable in the associated system results window. From this window, CBP operational personnel review and assess result for next action, including possible shipment examination. | 01/12/2011 | b) Developed in-house | Yes | High risk model results are returned to users as a system rule hit. These rule hits are viewable in the associated system results window. From this window, CBP operational personnel review and assess result for next action, including possible shipment examination. | This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). 
| Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Yes | a) Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Risks include false positives and negatives, which could lead to unnecessary inspections or missed detections; bias in algorithms that may disproportionately target certain importers; and the ongoing challenge of traffickers adapting their methods to evade detection. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | d) Law, operational limitations, or governmentwide guidance precludes an opportunity for an individual to appeal | Other | |||||
| Department Of Homeland Security | CBP | DHS-2391 | Illicit Trade | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The use case is designed to improve the identification and prioritization of high-risk inbound cargo shipments that may violate trade regulations. Using advanced AI and machine learning models, the system enhances risk assessment processes, helping CBP personnel more effectively detect suspicious shipments and potential compliance issues. By analyzing historical data, risk attributes, and employing predictive modeling, the AI supports CBP in streamlining enforcement actions and improving the accuracy of targeting shipments for additional review and screening. This approach helps optimize resource allocation and strengthens CBP's ability to enforce trade regulations efficiently. | The model identifies high-risk shipments to support CBP personnel in managing their workload associated with detecting threats and selecting candidate shipments for review and additional screening. | The model results are sent to the Automated Targeting System for review and assessment by operational personnel, who may conduct additional screening if necessary. | 25/07/2023 | b) Developed in-house | Yes | The model results are sent to the Automated Targeting System for review and assessment by operational personnel, who may conduct additional screening if necessary. | This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). 
| Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Yes | a) Yes | https://www.dhs.gov/publication/automated-targeting-system-ats-update https://www.govinfo.gov/content/pkg/FR-2012-05-22/html/2012-12396.htm | Risks include false positives and negatives, which could lead to unnecessary inspections or missed detections; and bias in algorithms that may disproportionately target certain importers. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | |||||
| Department Of Homeland Security | CBP | DHS-2412 | Supervised Traveler Identity Verification Services (Officer Initiated) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Biometric Entry processing fulfills a Congressional mandate. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response | 01/09/2017 | c) Developed with both contracting and in-house resources | CBP procured mobile devices (Apple, Samsung) and commercial off-the-shelf cameras (Logitech) | Yes | Leverages DHS facial matching technologies to provide a match or no match response | Border Crossing Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service; https://www.federalregister.gov/documents/2016/12/13/2016-29898/privacy-act-of-1974-department-of-homeland-securityus-customs-and-border-protection-007-border | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service; https://www.federalregister.gov/documents/2016/12/13/2016-29898/privacy-act-of-1974-department-of-homeland-securityus-customs-and-border-protection-007-border | The key risk is that TVS verification accuracy may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. 
| c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2413 | Semi-Supervised Traveler Identity Verification Services (Traveler Initiated) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Biometrically processes travelers on entry. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages TVS facial matching technologies to provide a match or no match response | 01/02/2024 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages TVS facial matching technologies to provide a match or no match response | Trusted Traveler Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification accuracy may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2414 | 3rd Party Traveler Identity Verification Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Biometric Exit processing fulfills a Congressional mandate. | TVS is a cloud-based facial biometric matching service that enables CBP, External Partners, and Other Government Agencies (OGA) to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/05/2017 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Border Crossing Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification accuracy may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2415 | Traveler Self-Service Mobile Identity Verification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The use of the Traveler Verification Service (TVS) in these use cases enables biometric identity verification and facilitates travel. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/01/2022 | c) Developed with both contracting and in-house resources | NEC (algorithm only) | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Vetting/Border Crossing Information/ Trusted Traveler Information | Yes | https://www.dhs.gov/publication/electronic-system-travel-authorization ; https://www.dhs.gov/publication/dhscbppia-051-automated-passport-control-apc-and-mobile-passport-control-mpc ; https://www.dhs.gov/publication/global-enrollment-system-ges | Yes | a) Yes | https://www.dhs.gov/publication/electronic-system-travel-authorization ; https://www.dhs.gov/publication/dhscbppia-051-automated-passport-control-apc-and-mobile-passport-control-mpc ; https://www.dhs.gov/publication/global-enrollment-system-ges | The key risk is that TVS verification accuracy may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. 
| c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | General solicitations of feedback and comments from the public | ||||
| Department Of Homeland Security | CBP | DHS-2416 | Traveler Identity Verification Services (Vetting) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The use of the Traveler Verification Service (TVS) enables CBP to enhance the identification of possible threats by leveraging facial recognition technology to identify biometric matches to derogatory records that are not identified through existing biographic targeting and entity resolution mechanisms. | CBP's Traveler Identity Verification Services (Vetting) utilizes facial recognition technology to enhance threat identification by matching travelers' biometrics against records of concern. | When the system identifies a potential match to concerning records, CBP personnel conduct a manual facial comparison to determine whether the record is likely associated with the individual. | 01/12/2018 | c) Developed with both contracting and in-house resources | NEC | Yes | When the system identifies a potential match to concerning records, CBP personnel conduct a manual facial comparison to determine whether the record is likely associated with the individual. | Border Crossing Information. | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification accuracy may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2538 | Open Source and Social Media Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | The AI is intended to solve the problem of efficiently identifying potential threats and admissibility concerns by quickly analyzing vast amounts of open-source and social media data for security risks to enhance U.S. national security. This tool then presents information to a CBP Officer/analyst for manual review, verification, and validation of violations of Title 8 and Title 19 or other laws that CBP is sworn to enforce. The output is not used as the sole basis for action or decision making. | CBP uses this tool to conduct targeted queries to aid CBP in open-source research to monitor potential threats or dangers or identify travelers who may be subject to further inspection for violation of laws CBP is authorized to enforce or administer. | This tool utilizes AI modules for text detection and translation, as well as object and image recognition, to provide analysts with possible matches to manually review in a single interface instead of performing multiple manual queries. The output is not used as the sole basis for action or decision making; it is used to identify additional open-source or social media presence of a person, or to identify additional selectors (such as phone numbers and emails) that are previously unknown to CBP and are compared by an analyst against Government systems to identify additional derogatory information. | 01/01/2025 | a) Purchased from a vendor | NexisXplore | No | This tool utilizes AI modules for text detection and translation, as well as object and image recognition, to provide analysts with possible matches to manually review in a single interface instead of performing multiple manual queries. 
The output is not used as the sole basis for action or decision making; it is used to identify additional open-source or social media presence of a person, or to identify additional selectors (such as phone numbers and emails) that are previously unknown to CBP and are compared by an analyst against Government systems to identify additional derogatory information. | Training data was collected from several publicly available, social media, and media outlet sites. This approach ensured the model was trained across several different groups representing an array of possible language types and vernaculars so as not to cause bias toward a specific demographic. Along with the above open-source data, the vendor leverages a mix of proprietary data to ensure the data is representative of real-world conditions and context. | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | b) In-progress | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | The AI could potentially mislabel an object; however, all results are reviewed by a law enforcement officer, and OSINT results are only one section of data among many when reviewing admissibility. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | b) Not applicable | a) Yes, an appropriate appeal process has been established | In-progress | ||||
| Department Of Homeland Security | CBP | DHS-2561 | Cryptocurrency Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Identification of transactions that may have been made with a designated entity, company, or location that may be using crypto to circumvent financial reporting. This tool will help CBP identify businesses and travelers who may be using cryptocurrency to conceal illicit transactions not reported within the U.S. financial system. | Quicker identification of risks related to cryptocurrency accounts to aid in addressing potential admissibility concerns (in lieu of completely manual research of the same accounts). | Highlighting transactions (labeling them as at-risk) that may have been made with a designated entity, company, or location that may be using crypto to circumvent financial reporting. This includes use of dark web marketplaces, illicit funding sites, or other financial activity that would normally be flagged by a bank or other financial institution in accordance with US law. | 01/01/2025 | a) Purchased from a vendor | TRM Labs | No | Highlighting transactions (labeling them as at-risk) that may have been made with a designated entity, company, or location that may be using crypto to circumvent financial reporting. This includes use of dark web marketplaces, illicit funding sites, or other financial activity that would normally be flagged by a bank or other financial institution in accordance with US law. | Training data was collected from several publicly available, social media, and media outlet sites. This approach ensured the model was trained across several different groups representing an array of possible language types and vernaculars so as not to cause bias toward a specific demographic. 
Along with the above open-source data, the vendor leverages a mix of proprietary data to ensure the data is representative of real-world conditions and context. | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | b) In-progress | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | Identification of US Government crypto wallets that are being utilized for criminal investigations and that are participating in transactions on illicit marketplaces. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | ||||
| Department Of Homeland Security | CBP | DHS-2570 | Traffic Jam | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Access to a robust data source of potential runaways, missing children, and sex trafficking victims alongside the associated analytics tools to support CBP in our mission to protect the most vulnerable populations. | Traffic Jam leverages AI for facial recognition purposes and to draw together similar data points from a large pool of data in support of human trafficking and exploitation investigations. Manually combing through available data takes an extensive amount of time and resources, with often less than desirable outcomes. The AI solution in Traffic Jam is able to quickly identify the most relevant information and possible matching for immediate review, allowing officers and analysts to focus limited time in a high-tempo operational environment in the most effective manner possible. | The AI system is only used for the purposes of returning possible matches to query criteria within the Traffic Jam system. The data submitted by the end user is not used to train or refine the AI model; submitted images are cached for 2 hours to facilitate user support, but nothing is retained permanently in the Traffic Jam database. Additionally, the information is reported back to CBP officers and agents to review for further research and determination of next steps. | 29/09/2025 | a) Purchased from a vendor | Marinus Analytics | Yes | The AI system is only used for the purposes of returning possible matches to query criteria within the Traffic Jam system. The data submitted by the end user is not used to train or refine the AI model; submitted images are cached for 2 hours to facilitate user support, but nothing is retained permanently in the Traffic Jam database. 
Additionally, the information is reported back to CBP officers and agents to review for further research and determination of next steps. | Proprietary, public, and machine-labeled datasets, including structured and unstructured data such as online ads, research datasets, images, and geospatial information. All data used for model training and testing is either publicly available, lawfully obtained, or provided by partner agencies under appropriate agreements, and is processed in compliance with privacy and security standards. | Yes | No | a) Yes | Using Traffic Jam or similar commercial facial recognition vendors introduces additional privacy and civil liberties risks, particularly around data control, transparency, and accountability. Mitigation steps are taken as outlined in the Privacy Impact Assessment to minimize or eliminate the impacts of using artificial intelligence. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | a) Yes, an appropriate appeal process has been established | Other | ||||||
| Department Of Homeland Security | CBP | DHS-2619 | CBP Link | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | CBP Link utilizes liveness detection to ensure the photo that TVS is utilizing to conduct facial matching against CBP holdings is of a live person and not a 2-D image, as well as to validate the presence of a specific user by collecting geolocation to confirm that the person is in the required location. | CBP Link uses liveness detection and TVS uses facial recognition to compare live or uploaded images with CBP's database, enabling real-time identity verification. This automation streamlines border processes, enhances accuracy, and reduces fraud. | CBP Link outputs include identity match confirmation, fraud alerts, and traveler status updates for clearance in processes like boarding or border crossing. | 16/06/2025 | c) Developed with both contracting and in-house resources | IProov | Yes | CBP Link outputs include identity match confirmation, fraud alerts, and traveler status updates for clearance in processes like boarding or border crossing. | CBP Link submission information. | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0611_priv_pia-cbp-083-cbplink.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0611_priv_pia-cbp-083-cbplink.pdf | The key risk is a false negative facial match or liveness detection, with the potential impact of a user not being able to provide proof of departure. As an alternative, the user can provide proof of departure utilizing any of the means as described here: https://i94.cbp.dhs.gov/home | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-2669 | Land Border Integration | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The Land Border Integration system addresses the need for efficient and accurate data capture and processing during vehicle inspections at land-border Ports of Entry (PoEs). The system is designed to detect and interpret alphanumeric values from license plates captured by cameras, integrating with License Plate Readers (LPR) to classify license plate numbers, their country, and state of origin. Additionally, it leverages facial recognition technologies to analyze vehicle occupants, enhancing situational awareness for border officers. By utilizing artificial intelligence (AI) technologies and edge-based processing, the system minimizes reliance on centralized systems and enables real-time video stream analysis. This ensures timely and actionable insights, allowing officers to vet travelers more efficiently without manually entering information. The solution supports critical operations by improving the speed and accuracy of data processing, enhancing operational effectiveness, and streamlining the inspection process. | The intended purpose of the Land Border Integration system is to enhance the efficiency and accuracy of data processing during vehicle inspections at land-border Ports of Entry (PoEs). By leveraging artificial intelligence (AI) technologies, the system captures and interprets visual data, including license plate information, vehicle classification (make, model, and color), and occupant identification through facial recognition, in real time. This edge-based AI solution supports situational awareness and operational decision-making by providing timely and actionable insights to officers. 
It minimizes reliance on centralized systems, streamlines the vetting process, and reduces the need for manual data entry, enabling officers to focus on critical tasks. The system improves operational effectiveness, enhances border security, and facilitates efficient traveler processing. | The Land Border Integration system generates alphanumeric values extracted from license plates, along with additional outputs such as the detected vehicle's make, model, color, and license plate origin (country and state). These outputs are ingested and processed to support law enforcement activities during cross-border inspections. The system also provides facial recognition data to analyze vehicle occupants, further enhancing situational awareness. These outputs are presented to booth officers, who can accept or correct the AI-generated information. By delivering actionable data in real time, the system improves operational efficiency, streamlines the inspection process, and supports informed decision-making during border security operations. | 12/04/2021 | a) Purchased from a vendor | Rekor | Yes | The Land Border Integration system generates alphanumeric values extracted from license plates, along with additional outputs such as the detected vehicle's make, model, color, and license plate origin (country and state). These outputs are ingested and processed to support law enforcement activities during cross-border inspections. The system also provides facial recognition data to analyze vehicle occupants, further enhancing situational awareness. These outputs are presented to booth officers, who can accept or correct the AI-generated information. By delivering actionable data in real time, the system improves operational efficiency, streamlines the inspection process, and supports informed decision-making during border security operations. 
| In support of production performance metrics for device health purposes, LBI uses both license plate and RFID read metrics to evaluate device health states, for example, identifying breakage of devices when there are missed LPR or RFID reads. In the future, LBI intends to use license plate reads from production Ports of Entry (POE) to support training. | Yes | No | a) Yes | Potential impact to an individual or entity's civil liberties or privacy. | b) Yes – by an agency AI oversight board not directly involved in the AI’s development | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | ||||||
| Department Of Homeland Security | CBP | DHS-2731 | Mobile Fortify | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The "Mobile Fortify" application utilizes CBP's facial comparison or DHS's fingerprint matching to quickly verify subjects of interest during operations. | Utilizing facial comparison or fingerprint matching services, agents/officers in the field are able to quickly verify identity utilizing trusted source photos. | The mobile application will display either a no-match indicator or a match with biographic information back to the agent/officer. | 01/05/2025 | a) Purchased from a vendor | NEC | Yes | The mobile application will display either a no-match indicator or a match with biographic information back to the agent/officer. | Vetting/Border Crossing Information/ Trusted Traveler Information | Yes | Yes | a) Yes | The key risk is that CBP's facial comparison verification accuracy may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | d) In-progress | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other, Direct usability testing | ||||||
| Department Of Homeland Security | CBP | DHS-315 | ERNIE | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | ERNIE is used to analyze Radiation Portal Monitor (RPM) data to enhance the detection of radioactive materials. It provides a more efficient review of stream-of-commerce radiation portal monitor data and provides real-time risk assessments and alerts for potential threats. | The model enhances threat detection and prioritizes high-risk targets, improving operational efficiency and national security. | The model provides real-time risk assessments and alerts for potential threats detected by the Radiation Portal Monitors. It also provides prioritized recommendations for further screening based on the analysis of radiation data. | 01/10/2017 | c) Developed with both contracting and in-house resources | Countering Weapons of Mass Destruction, Department of Homeland Security | Yes | The model provides real-time risk assessments and alerts for potential threats detected by the Radiation Portal Monitors. It also provides prioritized recommendations for further screening based on the analysis of radiation data. | Numerical data from RPM radiation detectors and ERNIE assessments. | No | Yes | a) Yes | The key safety risk is that ERNIE might fail to identify a radiation threat. If ERNIE cannot make a decision, the system falls back to the default deterministic algorithm. Were there to be a pervasive failure of the system such that no indication was provided at all, a procedure is in place for the officer to perform a manual scan. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Other | ||||||
| Department Of Homeland Security | CBP | DHS-398 | Unified Processing/Mobile Intake | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The need to use every available resource to identify individuals who may pose a threat to national security or may be members of terrorist or transnational criminal organizations. | The objective is to facilitate the swift and accurate biometric identification of individuals encountered by CBP, thereby expediting their processing. | CBP utilizes facial matching technologies to verify identity. This process compares an individual's live photo against existing government photo holdings to confirm identity. | 01/03/2022 | c) Developed with both contracting and in-house resources | NEC (Nippon Electric Company) | Yes | CBP utilizes facial matching technologies to verify identity. This process compares an individual's live photo against existing government photo holdings to confirm identity. | Border Crossing Information | Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | Yes | a) Yes | https://www.dhs.gov/publication/dhscbppia-056-traveler-verification-service | The key risk is that TVS verification accuracy may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing, Other | ||||
| Department Of Homeland Security | CBP | DHS-80 | Traveler Entity Resolution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Traveler Entity Resolution AI/ML models aim to improve both security and operational efficiency by focusing on individuals who may present higher risks, by improving the certainty of traveler record matches to assist CBP personnel in identifying suspicious travelers for follow-on action. | To enhance the efficiency and effectiveness of screening passengers for potential security risks. The AI model assesses traveler data, such as travel patterns and historical records, to help CBP personnel prioritize higher-risk individuals for further screening. This streamlines the vetting process and allows CBP personnel to focus resources on the highest-risk travelers, thereby improving border security while reducing the burden of manual screening. | The outputs are integrated into the Automated Targeting System (ATS), which generates notifications to recommend further inspection or follow-up actions. These recommendations assist CBP personnel in making real-time decisions about which travelers to prioritize for further screening. CBP personnel retain the final authority in the decision-making process, ensuring that human judgment remains central to border security operations. | 01/12/2012 | b) Developed in-house | Yes | The outputs are integrated into the Automated Targeting System (ATS), which generates notifications to recommend further inspection or follow-up actions. These recommendations assist CBP personnel in making real-time decisions about which travelers to prioritize for further screening. CBP personnel retain the final authority in the decision-making process, ensuring that human judgment remains central to border security operations. | This model leverages data housed within the Automated Targeting System (ATS) Unified Passenger (UPAX). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Sex/Gender, Age | Yes | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Risks include false positives and negatives, which could result in delays for travelers, failure to detect narcotics smuggling, or missed detection; algorithmic bias may disproportionately target certain types of travelers and crossing behaviors (related to model training using historical seizures); and the ongoing challenge of traffickers adapting their methods to evade detection. These risks have been identified through research, real-world applications, and expert analyses. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | d) Law, operational limitations, or governmentwide guidance precludes an opportunity for an individual to appeal | Other | ||||
| Department Of Homeland Security | DHS | DHS-365 | Consular Consolidated Database (CCD) Facial Recognition (FR) On Demand Report (VISA Only) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Computer Vision | On-demand facial recognition services. | DHS Components use the Facial Recognition (FR) on Demand report (Visa only) to combat fraud by benefit applicants whose fingerprints are not in IDENT but who may have photos in the Department of State's (DoS) Consular Consolidated Database (CCD) that predate the fingerprinting of visa applicants. | Facial Recognition and other biometric checks and reports. | 01/04/2019 | a) Purchased from a vendor | Department of State | No | Facial Recognition and other biometric checks and reports. | Information stored within the Consular Consolidated Database. | Yes | Race/Ethnicity, Sex/Gender, Age | No | b) In-progress | Potential mismatch of face images and/or bias based on demographic data held by DoS. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | |||||
| Department Of Homeland Security | ICE | DHS-2408 | Hurricane Score | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of understanding which noncitizens in Enforcement and Removal Operations’ Alternatives to Detention - Intensive Supervision Appearance Program are most likely to abscond. | The Hurricane Score helps officers quickly evaluate substantial amounts of case information across thousands of Alternatives to Detention - Intensive Supervision Appearance Program participants. By surfacing a risk indicator based on observed absconding patterns, it can provide additional insight that might not be apparent from manual review alone. This supports more consistent and efficient case reviews and helps officers allocate case management resources more effectively while maintaining individualized assessments. | Once individuals are enrolled in Enforcement and Removal Operations’ Alternatives to Detention - Intensive Supervision Appearance Program (ATD-ISAP), officers periodically review each case to determine whether the current level of case management and technology assignment remains appropriate or should be adjusted. During case reviews, an analyst or officer provides the Hurricane Score model with information already known about an ATD-ISAP participant, including case management details and participant actions. The model is a quasi-binomial, binary classification machine learning (ML) model trained on inactive ATD-ISAP case data to identify patterns associated with prior absconding behavior. Based on the provided inputs, the model outputs a score from 1 to 5, with higher scores indicating a higher model-estimated risk that the individual may abscond. Officers may then consider this score, along with many other factors, when determining whether current levels of case management or technology assignment remain appropriate or should be adjusted. | 01/02/2019 | b) Developed in-house | No | Once individuals are enrolled in Enforcement and Removal Operations’ Alternatives to Detention - Intensive Supervision Appearance Program (ATD-ISAP), officers periodically review each case to determine whether the current level of case management and technology assignment remains appropriate or should be adjusted. During case reviews, an analyst or officer provides the Hurricane Score model with information already known about an ATD-ISAP participant, including case management details and participant actions. The model is a quasi-binomial, binary classification machine learning (ML) model trained on inactive ATD-ISAP case data to identify patterns associated with prior absconding behavior. Based on the provided inputs, the model outputs a score from 1 to 5, with higher scores indicating a higher model-estimated risk that the individual may abscond. Officers may then consider this score, along with many other factors, when determining whether current levels of case management or technology assignment remain appropriate or should be adjusted. | Inactive case data from individuals enrolled in the ATD-ISAP program. | Yes | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | Sex/Gender, Age | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | Predictive ML techniques can produce misleading results, such as false positives, which could impact case management decisions if relied upon as a primary factor. For instance, an inaccurate Hurricane Score might lead to stricter or more lenient compliance or technology requirements for an individual. ERO mitigates this by using the score as one of many factors in determining case management or technology levels for individuals in the ATD program. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | d) Law, operational limitations, or governmentwide guidance precludes an opportunity for an individual to appeal | Direct usability testing | ||||
| Department Of Homeland Security | ICE | DHS-2457 | Facial Recognition for Locating Vulnerable Populations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The use case addresses the challenge of identifying and locating members of vulnerable populations, such as unaccompanied minors who have crossed the border, whose identities and locations are unknown to law enforcement. | The facial recognition service reduces the time personnel spend manually searching for images online and helps them discover potentially relevant photographs or profiles that they might not otherwise find. This can improve the speed and effectiveness of efforts to identify and locate vulnerable individuals and support appropriate protective or assistance measures. | Investigators submit facial photos obtained through lawful means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with references to the public sources where those images were found, so personnel can review them in context. These results are treated as leads that may help identify a person or their associates, but they are not confirmations on their own. | 23/01/2025 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Investigators submit facial photos obtained through lawful means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with references to the public sources where those images were found, so personnel can review them in context. These results are treated as leads that may help identify a person or their associates, but they are not confirmations on their own. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | ICE | DHS-2458 | Facial Recognition for National Security Investigations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This use case addresses the challenge of identifying individuals of interest in authorized national security investigations. | The facial recognition service helps investigators reduce the time needed to manually search for images and associated information online. By surfacing potentially relevant images that might otherwise be missed, it can improve the speed and effectiveness of national security investigations and allow investigators to focus their efforts on analysis, corroboration, and case-building. | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns candidate matches and links or references to the public sources where those images appear so investigators can review them in context and evaluate whether they may be relevant to a case. | 23/01/2025 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns candidate matches and links or references to the public sources where those images appear so investigators can review them in context and evaluate whether they may be relevant to a case. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | ICE | DHS-2459 | Facial Recognition for Investigations of Transnational Criminal Organizations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The problem this use case solves is the challenge of identifying unknown individuals involved in transnational criminal activities, such as violent crimes, drug trafficking, human smuggling, and financial fraud. | The facial recognition service reduces the time investigators spend manually searching for images online and helps them discover potentially relevant photographs or profiles that they might not otherwise find. This can improve the speed and effectiveness of investigations into complex transnational criminal networks while allowing investigators to focus on analysis, corroboration, and case-building. | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | 23/01/2025 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Investigators submit facial photos obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | ICE | DHS-2556 | AI-Assisted Resume Screening Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | a) High-impact | High-impact | Generative AI | This use case intends to solve the problem of human bias during resume reviews and the time-intensive process of reviewing candidate resumes. | This solution applies the same review criteria to every candidate’s resume. This reduces human cognitive bias and variability in how HR specialists evaluate candidate resumes. Additionally, this solution speeds up the time-to-hire by reducing the amount of time spent conducting manual candidate resume reviews. | The evaluation model compares each resume against the associated job requirements and provides a numerical score, a scoring group (red, yellow, green, or blue), related experience, and missing experience. The scoring group categorizes candidates based on the percentage of matching experience, with red indicating a weak candidate, yellow indicating moderate alignment, green indicating a strong candidate, and blue indicating that the system was unable to score the resume due to issues such as missing documents. | 01/01/2026 | c) Developed with both contracting and in-house resources | AIS | No | The evaluation model compares each resume against the associated job requirements and provides a numerical score, a scoring group (red, yellow, green, or blue), related experience, and missing experience. The scoring group categorizes candidates based on the percentage of matching experience, with red indicating a weak candidate, yellow indicating moderate alignment, green indicating a strong candidate, and blue indicating that the system was unable to score the resume due to issues such as missing documents. | OpenAI's GPT-4 is trained on Common Crawl and publicly available data. ICE does not provide any training data and uses the pre-trained base models as-is; the pre-trained models do not require additional training data. Human-evaluated resumes are compared to tool output for validation. Production data will be candidate resumes. | Yes | https://www.dhs.gov/sites/default/files/2025-03/25_0331_priv_pia-dhs-all-043a-talentacquisition-appendix-update.pdf | No | b) In-progress | https://www.dhs.gov/sites/default/files/2025-03/25_0331_priv_pia-dhs-all-043a-talentacquisition-appendix-update.pdf | In-Progress - potential impacts will be identified during AI Impact Assessment. | d) In-progress | b) Development of monitoring protocols is in-progress | Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | Establishment of an appropriate appeal process is in-progress | ||||
| Department Of Homeland Security | ICE | DHS-2577 | Mobile Fortify | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The AI is intended to solve the problem of confirming individuals’ identities in the field when officers and agents must work with limited information and access multiple disparate systems to identify individuals and retrieve existing data relevant to enforcement, investigations, and victim protection activities. | The use of AI in this process increases the speed and efficiency of identifying individuals and organizing identity information, supporting immigration enforcement, authorized investigations, and victim protection efforts. | Mobile Fortify runs on a mobile device and can capture facial images, contactless fingerprints, and photographs of identity documents. The application transmits this data to U.S. Customs and Border Protection (CBP) for submission to government biometric matching systems. Those systems use AI-based matching techniques, including facial recognition and fingerprint matching, to compare the captured data against existing records and return possible matches with associated biographic information. The tool also uses optical character recognition to extract text from identity documents to support additional checks. ICE does not own or interact directly with the AI models that perform biometric matching or optical character recognition. CBP owns and operates these models, and Mobile Fortify simply displays the results to ICE users. For additional details on the AI models that support the application, see CBP’s Mobile Fortify AI use case. | 20/05/2025 | c) Developed with both contracting and in-house resources | NEC is the third-party vendor CBP uses. ICE accesses these capabilities through CBP and does not contract directly with NEC. | Yes | Mobile Fortify runs on a mobile device and can capture facial images, contactless fingerprints, and photographs of identity documents. The application transmits this data to U.S. Customs and Border Protection (CBP) for submission to government biometric matching systems. Those systems use AI-based matching techniques, including facial recognition and fingerprint matching, to compare the captured data against existing records and return possible matches with associated biographic information. The tool also uses optical character recognition to extract text from identity documents to support additional checks. ICE does not own or interact directly with the AI models that perform biometric matching or optical character recognition. CBP owns and operates these models, and Mobile Fortify simply displays the results to ICE users. For additional details on the AI models that support the application, see CBP’s Mobile Fortify AI use case. | ICE does not own and did not train, test, or evaluate the AI models that power the Mobile Fortify application. See CBP’s Mobile Fortify AI use case for details on the application’s underlying AI models. | Yes | Yes | b) In-progress | In-Progress - potential impacts will be identified during AI Impact Assessment. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Development of monitoring protocols is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | ||||||
| Department Of Homeland Security | ICE | DHS-2666 | License Plate Capture and Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The AI is intended to solve the problem of time-consuming manual reviews of license plate images and data, which makes it challenging for investigators to identify relevant vehicle movements and patterns. | The AI capabilities reduce the need for manual review of large numbers of license plate images and logs. By streamlining plate reading and providing flexible search and summarization tools, the system helps investigators more quickly identify potentially relevant vehicle movements and patterns that might otherwise be missed, thereby improving the efficiency and effectiveness of investigative work. | The system processes images and metadata from ICE-owned and commercial license plate recognition cameras. It uses computer vision and optical character recognition to detect and read license plates and to capture associated information such as time, location, vehicle make and model, color, and visible characteristics like damage or signage. An integrated natural language interface powered by a large language model allows users to ask questions in everyday language, such as requesting detections of a particular plate or vehicle description over a period of time. The system converts these questions into structured database queries and returns relevant records, along with concise text summaries of vehicle movements. The system’s AI-enabled outputs are machine-read license plate numbers with associated time, location, and vehicle metadata, as well as natural language search results and summaries produced by the LLM interface. The LLM translates user questions into structured searches over the LPR data and summarizes relevant vehicle detections into concise descriptions of vehicles and their sightings. While license plate information can be used as a link to other personally identifiable information, the LPR system does not automatically link license plate records to driver or vehicle registration databases. Any such queries must be conducted separately in accordance with applicable laws and policies. | 19/09/2025 | a) Purchased from a vendor | Motorola | No | The system processes images and metadata from ICE-owned and commercial license plate recognition cameras. It uses computer vision and optical character recognition to detect and read license plates and to capture associated information such as time, location, vehicle make and model, color, and visible characteristics like damage or signage. An integrated natural language interface powered by a large language model allows users to ask questions in everyday language, such as requesting detections of a particular plate or vehicle description over a period of time. The system converts these questions into structured database queries and returns relevant records, along with concise text summaries of vehicle movements. The system’s AI-enabled outputs are machine-read license plate numbers with associated time, location, and vehicle metadata, as well as natural language search results and summaries produced by the LLM interface. The LLM translates user questions into structured searches over the LPR data and summarizes relevant vehicle detections into concise descriptions of vehicles and their sightings. While license plate information can be used as a link to other personally identifiable information, the LPR system does not automatically link license plate records to driver or vehicle registration databases. Any such queries must be conducted separately in accordance with applicable laws and policies. | The vendor trained its LPR system using a combination of real-world traffic camera footage, synthetic plate images, and public datasets containing diverse license plate formats from various regions. The models are optimized for high accuracy in different lighting, weather, and motion conditions, and are fine-tuned using data from deployments across cities and agencies. | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-lpr-january2018.pdf | No | b) In-progress | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-lpr-january2018.pdf | In-Progress - potential impacts will be identified during AI Impact Assessment. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Development of monitoring protocols is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | In-progress | ||||
| Department Of Homeland Security | ICE | DHS-362 | Facial Recognition for Investigations of Child Sexual Exploitation and Abuse | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This use case intends to solve the problem of identifying unknown victims and offenders depicted in child sexual abuse material. | The tool helps more quickly identify previously unknown victims and offenders who might not be discovered through manual investigative methods alone. By highlighting potentially relevant photographs or profiles across publicly available online images, it can accelerate victim identification and rescue efforts and support the disruption and prosecution of offenders who might otherwise remain undetected. | Homeland Security Investigations Child Exploitation Investigations Unit personnel submit newly discovered and unidentified child sexual abuse material images obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | 01/12/2020 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | Yes | Homeland Security Investigations Child Exploitation Investigations Unit personnel submit newly discovered and unidentified child sexual abuse material images obtained through lawful investigative means to an AI-enabled facial recognition service. The service compares these photos to publicly available online images to find visually similar faces. It returns possible matches, along with links or references to the public sources where those images were found, so investigators can review them in context. These results are treated as investigative leads that may point to potential identities or locations, but they do not constitute confirmation on their own. | Law Enforcement Sensitive (LES) | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-frs-054-may2020.pdf | The AI-enabled facial recognition service may return too many candidates, resulting in the collection of irrelevant personal information. Mitigation: The service only returns candidates meeting a set confidence score threshold, ranking results by highest confidence. Potential matches are used as investigative leads and require full validation through the investigative process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | b) Not applicable | b) Not applicable | Other | ||||
| Department Of Homeland Security | TSA | DHS-135 | Low Probability of False Alarm (Low-Pfa) Algorithm for on-person screening. | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | a) High-impact | High-impact | Computer Vision | Increase passenger throughput via improved detection performance, decreasing alarm rates and passenger touch rates by 50%. | The purpose is to reduce alarm rates while providing increased passenger throughput and experience. Utilizes Machine Learning (ML) to improve detection performance while decreasing alarm rates and passenger touch rates. The algorithm is gender-agnostic, so officers no longer need to select a passenger's gender prior to the scan. Advanced imaging technology (AIT) throughput and AIT utilization have increased with this new algorithm. Note: Once the algorithm is trained, it is locked down and no longer learning. | The AI outputs target coordinates to the operator viewing station, which are viewed as a bounding box on a representative human figure. | 12/12/2022 | a) Purchased from a vendor | Leidos, Rohde & Schwarz | No | The AI outputs target coordinates to the operator viewing station, which are viewed as a bounding box on a representative human figure. | Vendor AITs are tested in a laboratory environment using mock passengers. Statistical tests are performed for probability of false alarm and probability of detection, performance measures for detection capability. | No | https://www.dhs.gov/sites/default/files/publications/privacy-tsa-pia-32-d-ait.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-tsa-pia-32-d-ait.pdf | Security risks include false negatives allowing threats to get through to the sterile side of the airport, or high false alarm rates slowing operations. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | TSA | DHS-327 | Credential Authentication Technology with Camera System (CAT-2) and AutoCAT (CAT-2 in an e-gate form factor) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Improving the detection of imposters. | The Transportation Security Administration (TSA) uses AI-based, one-to-one (1:1) and one-to-many (1:n) facial matching technologies at some checkpoints to assist human reviewers with traveler identity verification. The purpose and expected benefits of the technology include increased speed and accuracy of identity verification at the checkpoint while improving detection of imposters. | The system produces a recommendation to the Transportation Security Officer (TSO) to indicate whether the person presenting the identity document is similar to the face on the photo ID document. In the event of a non-match, the TSO is responsible for additional identity verification steps to verify the identity of the traveler. | 01/09/2023 | a) Purchased from a vendor | IDEMIA Identity, Security USA LLC | Yes | The system produces a recommendation to the Transportation Security Officer (TSO) to indicate whether the person presenting the identity document is similar to the face on the photo ID document. In the event of a non-match, the TSO is responsible for additional identity verification steps to verify the identity of the traveler. | During the development, the original equipment manufacturer trained the technology using their own data for 1:1 facial comparison. Prior to initial deployment, DHS S&T conducted evaluation of the biometrics algorithms using volunteers for facial matching validation. During TSA's continuous evaluation, a photo is taken of the passenger and compared to the photo on the identification to determine whether it was an actual match to the individual. 
| Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-tsa046b-tdc-june2020.pdf | No | a) Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-tsa046b-tdc-june2020.pdf | In the event of a non-match, the traveler may make a second attempt or the TSA may perform additional identity verification steps to verify the identity of the traveler. This process may add between 20 seconds and a few minutes to the identity verification and security screening process. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | Direct usability testing | ||||
| Department Of Homeland Security | TSA | DHS-345 | PreCheck Touchless Identity Solution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Assist human reviewers with traveler identity verification. | TSA is using Facial Comparison to verify a passenger’s identity at its security checkpoint locations using the CBP Traveler Verification Service (TVS). This process streamlines passenger identity verification, increasing the speed of security checks while maintaining a high degree of safety for all passengers and crewmembers. | TSA is leveraging CBP's TVS system technology as an optional process for passengers traveling via certain airports who wish to further expedite their TSA PreCheck or crew member ID verification process. This additional TSA PreCheck feature is voluntary, and passengers may opt-out of the process at any time and instead choose the standard identity verification by a Transportation Security Officer (TSO). Crew members that wish to opt-out will be sent to the security checkpoint to process through screening. | 01/10/2018 | c) Developed with both contracting and in-house resources | CBP TVS, NEC Algorithm | Yes | TSA is leveraging CBP's TVS system technology as an optional process for passengers traveling via certain airports who wish to further expedite their TSA PreCheck or crew member ID verification process. This additional TSA PreCheck feature is voluntary, and passengers may opt-out of the process at any time and instead choose the standard identity verification by a Transportation Security Officer (TSO). Crew members that wish to opt-out will be sent to the security checkpoint to process through screening. | Data includes images captured during prior CBP inspections, U.S. passport and visa records, immigration records, and photographs from DHS encounters. 
TSA evaluates the matching score through a quality assurance process to compare the “ground truth” data from the passenger identification information against the determination made by the algorithm. The passenger information captured during quality assurance is not retained by TSA. | Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | The key risk is that the TVS verification may degrade over time based on the parameters of assessment for comparing images to templates. The facial recognition does not enter or retrieve data; it is only comparative. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public, Direct usability testing | ||||
| Department Of Homeland Security | USCIS | DHS-130 | Text Analytics Data Science Sentence Similarity Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The Text Analytics capability employs machine learning and data graphing techniques to identify patterns that may indicate potential fraud, national security, and/or public safety concerns by scanning the digitized narrative sections of the associated applications and looking for common language patterns. | Text Analytics augments the tedious and time-consuming manual process to identify potential fraud, national security, and/or public safety concerns and enables the identification of such concerns across jurisdictional boundaries. It increases the integrity of immigration programs, strengthens officers’ confidence in their work, and contributes to the reduction in customer wait times. | Text Analytics does not make predictions, recommendations, or decisions. It is merely a research tool that identifies potential patterns, while remaining agnostic as to whether those patterns identify potential fraud, national security, and/or public safety concerns. Instead, trained staff evaluate the patterns to determine whether they identify potential concerns and then validate and/or invalidate those potential concerns through the course of their investigations or adjudications. | 01/11/2019 | c) Developed with both contracting and in-house resources | Inadev | Yes | Text Analytics does not make predictions, recommendations, or decisions. It is merely a research tool that identifies potential patterns, while remaining agnostic as to whether those patterns identify potential fraud, national security, and/or public safety concerns. 
Instead, trained staff evaluate the patterns to determine whether they identify potential concerns and then validate and/or invalidate those potential concerns through the course of their investigations or adjudications. | Text Analytics stores information extracted from benefit forms and supporting documents, focusing on the narrative portions of those documents. | Yes | https://www.dhs.gov/publication/dhsuscispia-085-pangaea-pangaea-text | Yes | a) Yes | https://www.dhs.gov/publication/dhsuscispia-085-pangaea-pangaea-text | There is a small risk of false positives or false negatives due to the model. The risk is mitigated through a manual review of any information produced from the tool. Text Analytics is a decision support tool. Text Analytics does not make recommendations of fraud or benefit/adjudication decisions; any decisions made from information stored in the tool are made through a manual review by a USCIS employee. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | Direct usability testing | ||||
| Department Of Homeland Security | USCIS | DHS-181 | Automated Realtime Global Organization Specialist (ARGOS) for Company Registration Submissions to E-Verify | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | The goal of the use case is to leverage sentiment analysis in ARGOS to streamline the process and accelerate work for the individual who is researching a company that has submitted its information for registration to E-Verify. | ARGOS sentiment analysis produces a risk score, and keyword extraction identifies the keyword category of interest to the VAC MPAs (management and program analysts) for the aggregated open-source information, helping to quickly surface any pertinent information to aid the MPAs in their open-source investigation of company applications. This saves potentially thousands of MPA man-hours in open-source investigation and creates a single source of truth for each MPA's investigation of a company application. This, in turn, allows for quicker application processing and, if risk of company fraud exists, much faster referral processing, quickening the next-step referral to FDNS for further investigation. | Responses back to a user dashboard accessible internally only by VAC Management and Program Analyst (MPA) personnel. Keywords relating to the MPA's work interest are extracted if present and risk scores are assigned to the open-source collected information. The data is presented to the MPA on the GUI (graphical user interface) dashboard. | 12/08/2023 | c) Developed with both contracting and in-house resources | IBM | Yes | Responses back to a user dashboard accessible internally only by VAC Management and Program Analyst (MPA) personnel. 
Keywords relating to the MPA's work interest are extracted if present and risk scores are assigned to the open-source collected information. The data is presented to the MPA on the GUI (graphical user interface) dashboard. | The fine-tuned dataset is collected from open-source queries from the Bing API connected to the ARGOS system. This is publicly available data that doesn't contain any PII. | No | Yes | a) Yes | Lack of Domain-Specific Accuracy: model tested on company data across different industries resulted in inconsistent performance. Limited Generalization to Unseen Data: model’s performance on validation datasets lower than on training data, indicating potential overfitting. Misinterpretation of Sentiment: instances of sarcasm/irony not recognized. All risks identified in testing and evaluation phases. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | General solicitations of feedback and comments from the public | ||||||
| Department Of Homeland Security | USCIS | DHS-2384 | Verification Match Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | By consolidating these into a single, unified Verification Match Model within a separate microservice, the use case aims to improve the accuracy of responses and reduce the need for manual review. ML plays a key role in the continuous improvement of these models, ultimately reducing the need for manual case reviews. | Leveraging AI in the USCIS verification matching process of known records across systems is beneficial because it streamlines existing USCIS review by 1) improving associated system accuracy, 2) reducing human-error by automating person-and-record match scoring, and 3) matching at a higher volume than traditional tools or manual processes can capably achieve. | A recommendation and score that indicates person-and-record match probability used by verification systems (E-verify and SAVE) to improve accuracy in initial system response | 22/05/2024 | c) Developed with both contracting and in-house resources | IBM | Yes | A recommendation and score that indicates person-and-record match probability used by verification systems (E-verify and SAVE) to improve accuracy in initial system response | Individual's Names, Dates of Birth, and Document Identifiers from USCIS sourced data contained in CIS2, C3, ELIS, and Global. These are all private datasets within USCIS. 
| Yes | https://www.dhs.gov/publication/dhsuscispia-030f-e-verify-mobile-app-usability-testing, https://www.dhs.gov/publication/systematic-alien-verification-entitlements-save-program | Yes | a) Yes | https://www.dhs.gov/publication/dhsuscispia-030f-e-verify-mobile-app-usability-testing, https://www.dhs.gov/publication/systematic-alien-verification-entitlements-save-program | Model-match performance in terms of accuracy, precision, and recall. Identified via model evaluation and analysis for these performance statistics. | d) In-progress | a) Yes, sufficient monitoring protocols have been established | Establishment of sufficient and periodic training is in-progress | a) Yes | a) Yes, an appropriate appeal process has been established | In-progress | ||||
| Department Of Homeland Security | USCIS | DHS-413 | I-765 - USCIS Facial Recognition through IDENT (1:1 Face Recognition/Validation) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Using the Automated Biometric Identification System (IDENT) makes this process nearly instant and greatly enhances processing efficiency without compromising the effectiveness of current identity verification methods. Additionally, new rules created by the FBI’s Compact Council require biometric verification to perform fingerprint resubmissions for different filing reasons than the original fingerprint capture. Facial verification brings USCIS into compliance with this rule change. Furthermore, performing facial verification increases the integrity of information and identities by identifying conflicts early on, preventing issues from becoming pervasive across immigration systems. | This will allow the user to complete the biometric verification requirement without having to attend an appointment at an Applicant Support Center. This reduces the burden on the beneficiary as well as reducing demands on USCIS Applicant Service Center resources. | Match or no match response from IDENT. | 12/11/2024 | c) Developed with both contracting and in-house resources | Pluribus Digital | Yes | Match or no match response from IDENT. | The Office of Biometric Identity Management (OBIM) conducts manual testing and evaluation of its fingerprint, latent print, iris, and facial comparison algorithms. This process relies on carefully curated datasets, expert human analysis, and mathematical assessment. 
| Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | Potential mismatch of face images and/or bias based on demographic data held by USCIS | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | General solicitations of feedback and comments from the public, Other | ||||
| Department Of Homeland Security | USCIS | DHS-55 | Person-Centric Identity Services Deduplication Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Critical to the success of PCIS is the entity resolution and de-duplication of individual records from various systems of records to create a complete picture of a person. Using machine learning (ML), the model can identify which case management records belong to the same unique individual with a high degree of confidence. This allows PCIS to compile a full immigration history for an individual without the need for time-consuming research across multiple disparate systems. The de-duplication model plays a critical role in the entity resolution and surfacing of a person and all their associated records. The ML models are more resilient to fuzzy matches and handle varying data fill rates more reliably. | Using Machine Learning allows us to improve entity resolution as compared to rule-based systems. PCIS offers the ability to see a person's immigration history organized in one place. Specific benefits do or will include: an organized summary view of the identity with the individual's latest photo from PCIS; full immigration history including receipts associated with the applicant, regardless of case management system; mailing, physical, and safe address history of the individual organized in reverse chronological order, allowing users to easily find the most recent address; and all identifiers associated with the applicant, including A-Numbers, FINs, SSNs, ELIS account numbers, passport numbers, etc. | Numerical likelihood score which is used to determine if the record belongs to the individual. Likelihood scores are subjected to a high threshold (.98, maximum 1) to assess whether the record belongs to the individual. 
| 01/02/2023 | c) Developed with both contracting and in-house resources | MetroIBR | Yes | Numerical likelihood score which is used to determine if the record belongs to the individual. Likelihood scores are subjected to a high threshold (.98, maximum 1) to assess whether the record belongs to the individual. | USCIS-only data derived from 7 form-processing source systems including C3, ELIS, CPMS, GLOBAL, CIS2, AR-11, CAMINO. | Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | There is a small risk of false positives or negatives, which are identified and sent to a Manual Resolution Queue. The queue is processed by authorized and trained personnel. Human review is still done for the actual benefit or request being sought. AI is used to identify the person seeking the benefit or request. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | Direct usability testing | ||||
| Department Of Homeland Security | USSS | DHS-415 | Criminal Investigations (OBIM) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The intended problem to solve is the identification of unknown victims and suspects involved in crimes that undermine the integrity of U.S. financial and payment systems. By using facial recognition and biometric image comparison, the USSS aims to efficiently and accurately identify individuals connected to criminal investigations, thereby supporting law enforcement efforts to detect, arrest, and prevent such crimes. | The intended purpose of this AI is to allow USSS personnel to submit available photographs or video stills of these unknown persons as probe images (facial images or templates searched against the gallery of an FRS) to other government agencies for comparison against their image galleries. The agencies will query their image galleries of known persons and may provide lists of potential matches. They may use the potential matches to produce investigative leads which will assist in the further identification of victims or suspects. Additionally, we may request another government agency to conduct a one-to-one comparison of two photographs or video stills for investigative use. | The system will query image galleries of known persons and may provide lists of potential matches. USSS personnel may use the potential matches to produce investigative leads which will assist in the further identification of victims or suspects. | 01/01/2017 | a) Purchased from a vendor | NEC | Yes | The system will query image galleries of known persons and may provide lists of potential matches. USSS personnel may use the potential matches to produce investigative leads which will assist in the further identification of victims or suspects. | Trained on mugshot data and paid volunteers. 
| Yes | https://www.dhs.gov/sites/default/files/2024-09/24_0912_privacy-pia-usss033-facialrecognition_0.pdf | Yes | a) Yes | https://www.dhs.gov/sites/default/files/2024-09/24_0912_privacy-pia-usss033-facialrecognition_0.pdf | The product was developed by NEC using AI and deep machine learning to train the algorithm; however, the current NEC product used by OBIM in the production environment does not use AI to continue training the NEC algorithm on production data. Because OBIM/NEC do not use AI on production data to continue training the algorithm, the risks associated with the use of AI and ML in the face candidate list process are significantly limited. | c) Yes – by the CAIO | a) Yes, sufficient monitoring protocols have been established | Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | Direct usability testing | ||||
| Department Of Homeland Security | CBP | DHS-101 | The Advanced Trade Analytics Platform (ATAP) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. Additionally, the AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Classical/Predictive Machine Learning | ATAP aims to provide insights into trends and behaviors in trade activity to support a proactive risk management and enforcement posture in the agency’s mission execution. | To create efficiencies and unlock key insights in CBP's trade mission execution through the application of data analytics, machine learning, and AI. | Model output is provided in dashboards and other visualization mechanisms for operator assessment and action determination. | 07/03/2022 | c) Developed with both contracting and in-house resources | Elder Research Inc, DevTech Systems Inc., Guidehouse | Yes | Model output is provided in dashboards and other visualization mechanisms for operator assessment and action determination. | ATAP relies on CBP source system information from CBP's ACE, ATS, and SEACATS systems, including import/export filing information, compliance reviews, targeting, seizure, and fine/penalty information. | Yes | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-183 | Public Information Compilation for Travel Threat Analysis (Dataminr) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. The AI output only provides the officer with compiled public information. The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Natural Language Processing (NLP) | Provides situational awareness of open-source social media and news reporting to enhance CBP Screening, Vetting and security of the homeland. | This tool significantly reduces the amount of time it takes for users to collect and compile commercially available open-source information when attempting to identify possible threats related to national security, border violence, CBP facilities, CBP employee safety and other topics with a CBP-nexus involving air, sea, and land travel to and/or from the U.S. | The AI output is compiled publicly available information for awareness. CBP employees further research the information, including reading the source information, to determine if there is a possible threat. | 01/11/2024 | a) Purchased from a vendor | Dataminr | Yes | The AI output is compiled publicly available information for awareness. 
CBP employees further research the information, including reading the source information, to determine if there is a possible threat. | Training data was collected from several publicly available, social media, and media outlet sites. This approach ensured the model was trained across several different groups representing an array of possible language types and vernaculars so as not to cause bias toward a specific demographic. Along with the above open-source data, the vendor leverages a mix of proprietary data to ensure the data is representative of real-world conditions and context. | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | No | https://www.dhs.gov/publication/dhscbppia-058-publicly-available-social-media-monitoring-and-situational-awareness | |||||||||||
| Department Of Homeland Security | CBP | DHS-188 | Airship Outpost for Conveyance Identification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI is not being used for tracking or analysis. It is simply identifying the alpha-numeric values of the conveyances in front of it. | Computer Vision | CBP must efficiently and accurately identify and document cross-border conveyances (aircraft, vessels, automobiles). | Outpost uses machine learning to identify the type of conveyance in front of the sensor camera and uses this information to determine where to capture the conveyance's identification (license plate, hull number, tail number, etc.). Conveyance identifiers exist in different locations on different conveyance types. By identifying the type of conveyance, the system knows where to focus the capture of mission-relevant information. | Identification and classification of the type of conveyance (e.g., automobile, aircraft, watercraft) including license plates, hull numbers, or tail numbers for monitoring purposes. | 01/09/2023 | a) Purchased from a vendor | Airship | Yes | Identification and classification of the type of conveyance (e.g., automobile, aircraft, watercraft) including license plates, hull numbers, or tail numbers for monitoring purposes. | The datasets that the system uses are GOTS and LES. Purchased commercial data sources are also used to enhance the value of the system. The AI only identifies the type of conveyance captured by the camera and determines the location of the alphanumeric identifiers used to identify it, such as license plates, hull numbers, or tail numbers. This information, along with an image of the conveyance and the date/time, is sent back. 
| Yes | https://www.dhs.gov/sites/default/files/2022-05/privacy-pia-cbp-tecs%20platform-april2022.pdf | Yes | https://www.dhs.gov/sites/default/files/2022-05/privacy-pia-cbp-tecs%20platform-april2022.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-2383 | Unmanned Aircraft Collision Avoidance | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case avoids collisions for small unmanned aircraft systems. It operates a video feed that activates the obstacle avoidance features. The obstacle avoidance capability assists the pilot on the ground to avoid colliding the unmanned aircraft with objects such as man-made structures, vehicles, trees, wires, or other objects in the projected flight path. The pilot receives a visual alert on the hand controller, indicating a possible collision, and in some cases the aircraft will slow down, change direction to avoid the obstacle, or stop. | Computer Vision | The use case solves the problem of navigating complex environments autonomously while ensuring obstacle avoidance in real time. By relying on AI-based 3D scanning functions instead of GPS, the system enhances safety and precision in drone operations, reducing the risk of collisions and enabling efficient, reliable use in diverse mission scenarios. It addresses the challenge of maintaining situational awareness and operational accuracy during unmanned aircraft missions, providing pilots with visual alerts to prevent potential collisions. | The platform operates on video feed only, which in turn activates the obstacle avoidance on the aircraft where the AI capabilities are housed. | The pilot of the sUAS will receive a visual alert on the hand controller, indicating a possible collision. | 01/10/2022 | a) Purchased from a vendor | Skydio and Xtender | Yes | The pilot of the sUAS will receive a visual alert on the hand controller, indicating a possible collision. 
| Live flight testing data of the platform in test and operational environments. | No | No | |||||||||||||
| Department Of Homeland Security | CBP | DHS-24 | Entity Resolution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Classical/Predictive Machine Learning | The use case leverages advanced technology to improve the analysis of global trade data and enhance its ability to identify risks within supply chains. By using AI tools to organize and analyze complex datasets, CBP can uncover patterns and relationships that may indicate unethical practices, such as forced labor. This innovative approach supports efforts to ensure compliance with trade laws, protect economic security, and promote fair and ethical trade practices. | Detection research of forced labor within supply chains utilizing an analytical AI platform. | Detection of potential forced labor within supply chains. | 01/05/2023 | c) Developed with both contracting and in-house resources | Altana | No | Detection of potential forced labor within supply chains. | Altana utilizes a combination of commercial, public, and proprietary data sources to build a searchable and traversable graph of global trade. These include bills of lading, customs declarations, and exclusive proprietary documentation from first-party logistics providers. 
| No | https://www.dhs.gov/sites/default/files/2023-09/23_0926_privacy-pia-cbp003c-acemodernizations.pdf | No | https://www.dhs.gov/sites/default/files/2023-09/23_0926_privacy-pia-cbp003c-acemodernizations.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-2417 | Process Efficiency Traveler Identity for Airline Check-in and Bag Drop | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This is an option for travelers, not a requirement. If a traveler chooses to use it and the service cannot match a traveler, the traveler may continue check-in/bag drop via another means. CBP does not make any decision or action based on a no-match. | Computer Vision | These use cases facilitate identity verification leveraging TVS. | The TVS Biometric matching service is a cloud-based facial biometric matching service that enables CBP, External Partners, and Other Government Agencies (OGA) to match a passenger’s identity against a trusted source throughout the travel continuum, which improves traveler facilitation and reduces manual identity verification. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/10/2018 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Border Crossing Information. | Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | Yes | https://www.dhs.gov/sites/default/files/2023-11/23_1128_priv_pia_tsa_046d_tdc.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-310 | Customs Broker License Exam - Proctor Support | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This is not a high-impact case because this would only be used for examinees taking the Customs Broker License Exam and not used in requests for federal services, processes, and benefits, to include loans and access to public housing. This is just a feature that the CBP exam vendor uses to detect cheating during the exam. | Computer Vision | Detect potential cheating during the Customs Broker License Exam. | The model supports remote proctoring of the exam and ensures the integrity of the testing process by confirming the exam is conducted under secure conditions, preventing cheating or fraud, while also verifying the identity of exam takers to confirm they meet the necessary requirements. | Integrity reports, identity confirmation, proctoring compliance feedback, and test results. | 31/03/2023 | c) Developed with both contracting and in-house resources | PDRI | No | Integrity reports, identity confirmation, proctoring compliance feedback, and test results. | PDRI trains its AI models for assessments by combining expert human ratings with robust data, using seasoned raters to score responses first, then training AI on these expert-validated examples, and continuously testing the AI's outputs against human judgments to ensure accuracy, fairness, and adherence to psychological testing standards. | Yes | https://www.dhs.gov/sites/default/files/2023-04/privacy-pia-cbp077-bmp-march2023.pdf.pdf | No | https://www.dhs.gov/sites/default/files/2023-04/privacy-pia-cbp077-bmp-march2023.pdf.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-35 | Autonomous Surveillance Tower | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI provides alerts when it detects the presence of an IoI (i.e., persons, vehicles, animals) in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. This is not a biometric system and does not identify or track specific individuals. | Classical/Predictive Machine Learning | CBP's limited manpower constrains its ability to manually monitor all areas of the border at all times. To account for this limitation, surveillance and sensor systems assist in monitoring the border. | The AST machine learning-assisted system augments the U.S. Border Patrol by enhancing the capabilities of individual users carrying out the domain awareness mission. The expected benefit is the ability of a single person to monitor an area magnitudes greater than could be covered with conventional CCTV or human surveillance. 
The ultimate outcome for the agency and the public is greater availability of agents to address more complex tasks and better strategic/tactical deployment of existing resources and personnel. | The AI provides alerts when it detects the presence of an IoI (i.e., persons, vehicles, animals) in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. This is not a biometric system and does not identify or track specific individuals. | 01/01/2020 | a) Purchased from a vendor | Anduril | Yes | The AI provides alerts when it detects the presence of an IoI (i.e., persons, vehicles, animals) in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. This is not a biometric system and does not identify or track specific individuals. | Metadata and data created by CBP, generally comprised of agent adjudications of autonomous sensory inputs of items of interest by the system. 
| No | https://cbpgov.sharepoint.com/:b:/r/sites/AutonomousSurveillanceTowerASTProgram/Shared%20Documents/General/Tech,%20Cyber,%20Eng%20docs,%20Anduril%20docs,%20Data%20sheets/AST%20CyberSec%20-%20ATOs,%20PTAs,%20ATTs/PTAs/Disposition%20PTA,%20CBP%20-%20Autonomous%20Surveillance%20Towers%20(AST),%2020230906,%20PRIV%20Final.pdf?csf=1&web=1&e=Q9OWDC | Yes | https://cbpgov.sharepoint.com/:b:/r/sites/AutonomousSurveillanceTowerASTProgram/Shared%20Documents/General/Tech,%20Cyber,%20Eng%20docs,%20Anduril%20docs,%20Data%20sheets/AST%20CyberSec%20-%20ATOs,%20PTAs,%20ATTs/PTAs/Disposition%20PTA,%20CBP%20-%20Autonomous%20Surveillance%20Towers%20(AST),%2020230906,%20PRIV%20Final.pdf?csf=1&web=1&e=Q9OWDC | |||||||||||
| Department Of Homeland Security | CBP | DHS-37 | Automated Item of Interest Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI runs on video and images captured from lawfully deployed technologies used to support the U.S. Border Patrol mission between Ports of Entry. The AI provides alerts when it detects the presence of an IoI, such as persons, vehicles, and animals, in the image frame. With regard to persons, this computer vision application is trained to determine if the object in the image frame is a person with a certain level of confidence and not another object that may be shaped similarly to a person. After the alert of a detection, a trained agent or user reviews the image to identify and classify the activity taking place. The AI merely alerts to the presence of an item it was trained to detect. | Classical/Predictive Machine Learning | The software is designed to analyze photographs and video feeds captured by field imaging equipment for review by U.S. Border Patrol (USBP) agents and personnel. Using proprietary software, the system processes and annotates images to identify whether they contain human subjects, animals, or vehicles. The system is designed to incorporate future enhancements that expand its detection capabilities and to improve accuracy based on user feedback. 
| The software analyzes images and video that are taken by operationally deployed equipment, which are then fed into CBP systems for review by USBP agents, Office of Field Operations (OFO) officers, and other CBP users. It provides quick identification of people either crossing into the U.S. at a time and place other than designated for entry, circumventing security at a port of entry, or those already inside the U.S. trying to elude capture, as well as the ability for human operators to quickly determine if subjects in an image are, in fact, human. | The system creates a layer which is overlaid over the image to produce a box around items of interest it has determined to be likely human beings. | 31/01/2020 | c) Developed with both contracting and in-house resources | Matroid | Yes | The system creates a layer which is overlaid over the image to produce a box around items of interest it has determined to be likely human beings. | All of the image data fed to the models are owned by USBP. | No | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-38 | Vessel Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The computer vision application scans a large grid within the viewshed of the high-definition camera. It then attempts to detect small vessels entering and exiting small waterways in and around the area where the camera is deployed. The system is calibrated to identify inbound and outbound traffic. When it detects a vessel, it sends an image to the cloud for user review and determination of whether any follow-up action is required. The AI output does not serve as a principal basis for decisions or actions related to the definition of High-Impact AI. | Computer Vision | The system uses AI-enhanced technologies and analytics to improve maritime detection and tracking in areas with significant trade and recreational water vessel activity. The system increases situational awareness and responsiveness to potential threats, assisting human operators in identifying and classifying vessels of interest. Agents can define a search area with specific criteria, which is transmitted to sensors. Detected images are analyzed by AI algorithms that filter, detect, and categorize objects into Items of Interest (IoI) or other objects. IoIs are shared across detection systems and tracked seamlessly across multiple sensors, while non-relevant objects are excluded. This approach enhances efficiency in detecting and addressing IoIs, particularly during high-traffic periods, by providing alerts and tracking information to human operators. | Current surveillance technology does not have machine-assisted classification of targets on screen, making it harder for human operators to distinguish legitimate from illegitimate traffic in times of high volumes of legitimate traffic. 
The project intends to support human operator identification and classification of potential illicit vessels. Benefits would be more efficient detection and resolution of IOIs, especially during times of high-volume traffic. | Alerts and tracks of detected Items of Interest (IOIs) to human operator workstations. | 28/07/2025 | c) Developed with both contracting and in-house resources | JHU APL | No | Alerts and tracks of detected Items of Interest (IOIs) to human operator workstations. | CBP images of open waterways for vessels. | No | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-401 | Vault Access Log (SPVAA) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case uses facial recognition technology in Seized Property Vault Activity Automation (SPVAA) to create a log of access to a seized property vault. Photos of the CBP personnel accessing the vault are loaded into the application, which logs the entrance request, the case number associated with the entrance request, and the individual’s access to the vault. | Computer Vision | Automate identification of personnel entering into a secure seized property vault. | The system enhances monitoring and minimizes the risk of unauthorized access, contributing to stronger security protocols for handling seized property. | Leverages DHS facial matching technologies to provide a match or no match response. | 01/08/2022 | c) Developed with both contracting and in-house resources | NEC | Yes | Leverages DHS facial matching technologies to provide a match or no match response. | Border Crossing Information | Yes | https://www.dhs.gov/collections/privacy-impact-assessments-pia | Yes | https://www.dhs.gov/collections/privacy-impact-assessments-pia | |||||||||||
| Department Of Homeland Security | CBP | DHS-65 | Aircraft Landing Location Predictor (KESTREL) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI predicts aircraft landing locations. The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. | Classical/Predictive Machine Learning | Monitoring activities in the air and maritime domains to identify unusual patterns or behaviors. | The system analyzes potential landing locations for aircraft to support response planning and preparation. | The output provides a visual representation of the top three potential locations, using color-coded indicators to show the likelihood of each outcome. | 01/10/2022 | a) Purchased from a vendor | Maxar | Yes | The output provides a visual representation of the top three potential locations, using color-coded indicators to show the likelihood of each outcome. | Data is provided from a surveillance system in the form of real-time messages on detected tracks within the system. | Yes | Yes | |||||||||||||
| Department Of Homeland Security | CBP | DHS-81 | Passport Anomaly Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI output is an assessment of passport validity in response to an officer's request for passport validation. This result could be related to an inconsistency or abnormality in a passport's pattern. This is a tool available to CBP officers for confirming the validity of a passport. This result is used to notify the CBP officer that a passport may require review, as it may be part of a newly released sequence, may be invalid, or even possibly fraudulent. This is only one piece of information provided to CBP Officers during the normal course of their duties. The officers would use any results provided to research the validity of the passport through other sources. | Classical/Predictive Machine Learning | The Passport Anomaly Model addresses challenges stemming from the lack of formal notification regarding updates to passport series, such as issuance of new series or expiration of old ones. By analyzing historical trends, the model evaluates whether a passport exhibits typical or atypical characteristics and alerts officers when further scrutiny may be warranted. This capability enhances the integrity of travel document verification by enabling CBP personnel to conduct thorough and efficient reviews, ensuring security and accuracy in the inspection process. 
| The model assists CBP personnel in passenger targeting and vetting by analyzing anomalies in Electronic System for Travel Authorization (ESTA) and non-ESTA country-specific traveler passports to improve the accuracy of matching and streamline the screening process by reducing errors and enhancing the identification of high-risk travelers. | The model’s outputs are integrated into the Advanced Targeting System (ATS) application, delivering real-time results that assist CBP personnel in detecting passport anomalies and potentially fraudulent documents. | 01/07/2017 | b) Developed in-house | ManTech | Yes | The model’s outputs are integrated into the Advanced Targeting System (ATS) application, delivering real-time results that assist CBP personnel in detecting passport anomalies and potentially fraudulent documents. | This model leverages data provided by air carriers within the Advance Passenger Information System (APIS) and Electronic System for Travel Authorization (ESTA). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Age | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | ||||||||||
| Department Of Homeland Security | CBP | DHS-86 | Agriculture Commodity Model (AGC) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not produce an action or serve as a principal basis for a decision that has the potential to significantly impact the safety of human-life or well-being, climate or environment, critical infrastructure, or strategic assets or resources. The use case identifies agricultural pest risk in cargo shipments entering the United States. If a cargo shipment is identified as high-risk for pest infestation, it is prioritized for inspection. The model does not track or identify individuals. | Classical/Predictive Machine Learning | The AI/ML model aids CBP personnel in accurately and efficiently identifying cargo shipments at risk for agricultural pests in compliance with APTL's agricultural monitoring program. Due to the excessive volume and velocity of inbound cargo shipments, CBP personnel cannot possibly evaluate every shipment for risk. AI/ML models assist in performing a greater depth of risk assessment across all inbound cargo shipments to identify the most likely ones that would require additional attention, analysis, and possible examination. | The AGC Model uses data analytics and risk indicators to prioritize inspections and allocate resources effectively. This proactive approach helps protect the U.S. food supply and agricultural economy while facilitating legitimate trade. | CBP agriculture specialists use the AGC Model's risk assessment outputs, such as risk scores, to prioritize further screening of cargo shipments. | 01/07/2022 | b) Developed in-house | Yes | CBP agriculture specialists use the AGC Model's risk assessment outputs, such as risk scores, to prioritize further screening of cargo shipments. 
| This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-95 | Trade Entity Risk Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The Trade Entity Risk model/tool enhances cargo predictive threat models by providing a comprehensive risk profile that aggregates historical trade entity transactions, trading partner relationships, reviews, examinations, and violations (within CBP data holdings) to create quantifiable risk measures for all trade entities. The AI model serves as an input to larger AI/ML cargo risk targeting models to better assess cargo threats and inform focus areas for trade targeting. Its outputs are not directly shared with users or operators and the output does not serve as a principal basis for decision or actions. | Classical/Predictive Machine Learning | The need to continuously assess and identify trade entity risk to help better assess cargo threats. | The Trade Entity Risk model enhances existing predictive threat models by compiling a risk profile that includes historical transaction data, relationships with trading partners, and relevant compliance information. This aggregated data helps create measurable risk indicators for trade entities. | The calculated risk measures produced by the Trade Entity Risk model can be integrated into broader AI and machine learning systems to improve the evaluation of cargo-related threats. This output supports the standardization of trade entity risk, facilitating better data development for future predictive models. | 15/07/2025 | b) Developed in-house | ManTech | Yes | The calculated risk measures produced by the Trade Entity Risk model can be integrated into broader AI and machine learning systems to improve the evaluation of cargo-related threats. 
This output supports the standardization of trade entity risk, facilitating better data development for future predictive models. | This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-125 | Investigative Prioritization Aggregator | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides more efficient data processing for HSI personnel. The AI output may be used to produce investigative insights in the form of data, information leads or connections that HSI personnel can use to inform investigations, but the output itself is data preparation and organization so HSI personnel can produce those leads when combining the AI output with the personnel’s expertise and other relevant investigative data and information. Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Classical/Predictive Machine Learning | This use case intends to solve the problem of overwhelming data volumes that make it difficult for HSI personnel to prioritize high-value targets in criminal investigations. 
| The sheer volume of data associated with investigations often overwhelms human capabilities, making it challenging for HSI personnel to analyze evidence and identify key players in criminal networks. Currently, there is no effective mechanism to quantify the level of evidence related to a particular subject or entity, or to determine which actors within a network are the most influential. This is particularly critical in the context of the counter-opioid/fentanyl mission, where timely and accurate intelligence is essential. To address this challenge, this project utilizes machine learning to assign point values to data, enabling the scoring of information associated with a given selector, such as a phone number or legal name. This scoring system helps to understand the importance of an entity to investigations and the potential consequences of removing or neutralizing that entity. By doing so, HSI personnel can focus on high-priority targets and associated criminal networks, ultimately enhancing their ability to disrupt and dismantle these threats. | The output is scored entity data (such as a phone number or legal name). This scoring system helps to understand the importance of an entity to investigations and the potential consequences of removing or neutralizing that entity. | 01/02/2024 | c) Developed with both contracting and in-house resources | Sandia National Laboratories | Yes | The output is scored entity data (such as a phone number or legal name). This scoring system helps to understand the importance of an entity to investigations and the potential consequences of removing or neutralizing that entity. | Law Enforcement Sensitive (LES) investigative data. | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-2427 | Translation and Transcription for Investigative Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides more efficient data processing for HSI personnel. The AI output may be used to produce investigative insights in the form of data, information leads or connections that HSI personnel can use to inform investigations, but the output itself is data preparation and organization so HSI personnel can produce those leads when combining the AI output with the personnel’s expertise and other relevant investigative data and information. Personnel may use these insights for law enforcement purposes in ongoing investigations with existing targets to assist in activities such as producing risk assessments about individuals or identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Natural Language Processing (NLP) | This use case intends to solve the problem of the time-consuming process of translating and transcribing data for investigative purposes. | HSI investigators often encounter data from various sources, including legal and administrative processes, enforcement actions, and open-source materials, in languages other than English. 
To unlock the value of this data, it must be translated into English before further analysis can be conducted. The Translation and Transcription Service leverages neural machine translation (NMT) models for text translation and automatic speech recognition (ASR) and deep neural network (DNN) models with normalization for voice-to-text transcription. This innovative approach enables users to quickly triage large datasets and identify key information relevant to investigations. Any data deemed critical for court proceedings is then submitted to certified human translators for final review, ensuring that government resources are allocated efficiently and only used for necessary translations and transcriptions. | The Translation and Transcription Service leverages neural machine translation (NMT) models for text translation and automatic speech recognition (ASR) and deep neural network (DNN) models with normalization for voice-to-text transcription. | 01/02/2024 | c) Developed with both contracting and in-house resources | Booz Allen | Yes | The Translation and Transcription Service leverages neural machine translation (NMT) models for text translation and automatic speech recognition (ASR) and deep neural network (DNN) models with normalization for voice-to-text transcription. | AI models within the use case are not trained on agency data. Open-source models (i) Whisper is trained on 680K hours of multilingual and multitask supervised data collected from the web, (ii) No Language Left Behind (NLLB) is trained on a combination of publicly available datasets (additional information available in Section 5 of Meta’s NLLB whitepaper: https://research.facebook.com/file/585831413174038/No-Language-Left-Behind--Scaling-Human-Centered-Machine-Translation.pdf). 
| Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-2511 | AI-Assisted Audio/Video Redaction | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI-Assisted Audio/Video Redaction use case does not meet the definition of a High-Impact category under OMB M-25-21 due to its narrowly defined scope, reliance on human oversight, and non-decision-making role. This tool is designed to assist Homeland Security Investigations (HSI) by partially automating the redaction of audio and video evidence, such as detecting and obscuring faces, objects, or sensitive information (e.g., license plates, PII), to protect individuals’ identities during investigations and legal proceedings. Importantly, the tool does not perform 1:1 facial matching or identification, and its outputs are strictly limited to redactions, which are subject to comprehensive human review and editing before finalization. This ensures that the AI’s role is supportive rather than determinative, with no direct impact on investigative decisions or legal outcomes. | Computer Vision | This use case intends to solve the problem of the labor-intensive process of redacting audio and video evidence. | The AI-Assisted Audio/Video Redaction tool is used to reduce the manual effort required to redact audio and video evidence used during an investigation and subsequent legal proceedings. | The AI outputs in this use case are redactions to the media file. Users will further edit the redacted media prior to exporting the final redacted file to ensure completeness. Homeland Security Investigations conducts a human review of each frame within redacted files prior to distribution. | 01/07/2024 | a) Purchased from a vendor | Case Guard | No | The AI outputs in this use case are redactions to the media file. 
Users will further edit the redacted media prior to exporting the final redacted file to ensure completeness. Homeland Security Investigations conducts a human review of each frame within redacted files prior to distribution. | All training sets used for the model are from public and private collections of images. AI models are trained using a combination of real-world and synthetic datasets collected from publicly available sources. These datasets are curated to represent a broad range of conditions, including edge cases such as occlusions and poor lighting, to improve detection accuracy across varied scenarios. | Yes | https://www.dhs.gov/sites/default/files/2024-03/24_0307_priv_pia-ice-066a-pia-update.pdf | No | https://www.dhs.gov/sites/default/files/2024-03/24_0307_priv_pia-ice-066a-pia-update.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-2517 | Dark Web Threat Intelligence for Cyber Investigations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case provides a more efficient way to process, review, summarize, translate, and analyze dark web data for use in HSI investigations. The AI output may be used to produce investigative insights, but the output itself is supporting information for HSI personnel to use for ease of review, for further analysis, and to produce leads. Insights derived from translated data allow investigators to identify the most relevant data that, if deemed critical for court proceedings, can be submitted to certified human translators for final review. Personnel may use investigative insights for law enforcement actions such as identifying criminal suspects; however, all insights are reviewed and validated by both personnel and supervisors before being included in any official case management system, and do not serve as a principal basis for any law enforcement action or decision. Any enforcement decisions related to these insights are outcomes of the full Federal Investigation Process involving verifying any insights as evidence (including validating AI-translated material by a certified interpreter), presentation to a U.S. Attorney’s Office and potentially a District Court judge, decision to prosecute, judicial review, and trial and sentencing. | Generative AI | This use case intends to solve the problem of quickly identifying and summarizing relevant cyber threat data from the dark web, which can be difficult and time-consuming for analysts. | The summarized information helps analysts quickly identify threat actors, trends, and illicit platforms, enabling them to prioritize their investigative efforts. 
The data analysis and extraction techniques connect related information across data holdings and generate metadata to help analysts review and search results. The translation capability helps analysts identify non-English data responsive to an investigation and saves time otherwise spent translating non-responsive data. | The system’s AI outputs are concise summaries of search results, English translations of non‑English data, and metadata that highlights potential connections and leads. These outputs assist analysts in quickly identifying key findings while retaining access to the original data for deeper analysis and verification. All outputs are part of a broader investigative process and are not used as the sole basis for enforcement actions. | 01/08/2024 | a) Purchased from a vendor | Law Enforcement Sensitive (LES) | No | The system’s AI outputs are concise summaries of search results, English translations of non‑English data, and metadata that highlights potential connections and leads. These outputs assist analysts in quickly identifying key findings while retaining access to the original data for deeper analysis and verification. All outputs are part of a broader investigative process and are not used as the sole basis for enforcement actions. | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2547 | Entity Resolution for Global Trade Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Although the use case falls into a presumed high-impact category related to law enforcement investigations, it does not meet the high-impact definition as its outputs do not directly serve as a principal basis for enforcement actions or regulatory decisions. Instead, the AI-generated knowledge graph provides a foundation for further human-led investigation, requiring source validation through separate processes. This delineation ensures that the AI output remains a supportive rather than a determinative factor, disqualifying it from the high-impact AI system category. | Classical/Predictive Machine Learning | The AI is intended to solve the problem of investigators having to manually piece together fragmented global trade and supply chain data from many sources, which makes it difficult to see relationships among entities and identify potential leads in transnational criminal investigations. | This platform improves Homeland Security Investigations' ability to validate existing information, understand complex supply chain networks, and generate leads in transnational criminal investigations. | The platform uses AI Machine Learning (ML) models for data collection, data structuring, entity resolution, network analysis, and risk assessment. These ML processes contribute to the platform’s output, a dynamic knowledge graph and user-friendly interface for global supply chain research. | 10/10/2024 | a) Purchased from a vendor | Altana | No | The platform uses AI Machine Learning (ML) models for data collection, data structuring, entity resolution, network analysis, and risk assessment. 
These ML processes contribute to the platform’s output, a dynamic knowledge graph and user-friendly interface for global supply chain research. | The platform’s machine learning models were trained and evaluated by the vendor using its own datasets, which are derived from public and commercially sourced trade and logistics records (such as customs declarations, bills of lading, and shipment data from air, rail, and sea carriers). Homeland Security Investigations (HSI) does not provide any ICE or HSI investigatory data to the vendor to develop, train, test, or operate the platform models. | No | No | |||||||||||||
| Department Of Homeland Security | ICE | DHS-2575 | Blockchain Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Despite falling into a presumed category of high-impact AI, this use case does not meet the definition because the AI’s outputs serve primarily as inputs to an investigative process rather than making legally binding or material decisions itself. HSI investigators review, validate, and contextualize the AI-generated outputs before integrating them into any official case management system. Enforcement decisions and outcomes arise from a full investigative process that includes judicial review and other safeguards, thereby distancing the AI outputs from direct impact on civil liberties or privacy. | Classical/Predictive Machine Learning | By utilizing this AI-powered blockchain analysis platform, investigators can uncover hidden connections across blockchain networks, detect illicit activities, and significantly reduce the time required for manual analysis, enhancing HSI’s ability to combat transnational crime effectively. | The use of AI within TRM Labs improves HSI’s ability to uncover hidden connections across blockchain ecosystems, detect illicit behaviors, and reduce the time required for manual analysis. | The platform’s outputs include confidence scores for address attributions, risk flags based on behavioral typologies, identification of hidden connections across blockchain ecosystems, and plain-language summaries of smart contracts. | 16/08/2022 | a) Purchased from a vendor | TRM Labs | No | The platform’s outputs include confidence scores for address attributions, risk flags based on behavioral typologies, identification of hidden connections across blockchain ecosystems, and plain-language summaries of smart contracts. 
| The platform uses vendor AI models trained and tested on public blockchain ledger data (public/external), as well as proprietary data, internal attribution and scoring data, behavioral data, network data, and synthetic data. | No | No | |||||||||||||
| Department Of Homeland Security | ICE | DHS-2578 | Enhanced Lead Identification and Targeting | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | While ELITE provides actionable data to ERO officers, its outputs are limited to normalized address data and do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effects on individuals. ERO officers review and validate the AI-driven outputs before determining actions, ensuring human oversight and additional verification steps. Furthermore, enforcement decisions are based on the full investigative process, which includes human analysis and validation of the source of AI outputs. As such, the AI system's role is limited to data extraction and normalization, rather than serving as a primary basis for enforcement actions. | Generative AI | The AI is intended to solve the problem of unstructured, hard‑to‑read address information in records like rap sheets and warrants, which makes it difficult and time‑consuming for Enforcement and Removal Operations officers to extract accurate addresses and build usable enforcement leads. | The integration of AI enhances data extraction capabilities and decreases the time spent on manual data normalization tasks. This provides Enforcement and Removal Operations officers with higher-quality leads and enables them to make better-informed decisions. | The outputs of Enhanced Leads Identification & Targeting for Enforcement (ELITE) are enriched leads that include AI-extracted addresses. Enforcement and Removal Operations officers review these leads to determine which are actionable and then share actionable leads across offices and areas of responsibility to coordinate enforcement operations. 
| 07/06/2025 | a) Purchased from a vendor | Palantir | Yes | The outputs of Enhanced Leads Identification & Targeting for Enforcement (ELITE) are enriched leads that include AI-extracted addresses. Enforcement and Removal Operations officers review these leads to determine which are actionable and then share actionable leads across offices and areas of responsibility to coordinate enforcement operations. | The system uses commercially available large language models trained on public domain data by their providers. The use of LLMs is limited to address extraction from criminal records such as rap sheets and warrants. ICE data was not used during the design, development, or training phases of the AI models. During operation, the AI models interact with ICE production data from multiple sources, including data from ICE’s Enforcement Integrated Database (EID). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-eid-may2019.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy-pia-ice-eid-may2019.pdf | |||||||||||
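The ELITE entry above describes extracting addresses from unstructured records such as rap sheets and warrants. A minimal sketch of that input/output shape only; the regex below is an illustrative stand-in (ELITE itself uses commercial large language models, not pattern matching), and the sample record is hypothetical:

```python
import re

# Illustrative pattern only: recognizes a simple "number street, city, ST zip"
# shape. A real LLM-based extractor would handle far messier free text.
ADDRESS_RE = re.compile(
    r"\b(\d{1,6}\s+[A-Za-z0-9 .]+?(?:St|Ave|Blvd|Rd|Dr|Ln)\.?,\s*"
    r"[A-Za-z .]+,\s*[A-Z]{2}\s+\d{5})\b"
)

def extract_addresses(text: str) -> list[str]:
    """Return address-like spans found in unstructured record text."""
    return ADDRESS_RE.findall(text)

record = "Subject last seen at 123 Main St, Springfield, IL 62701 per warrant."
print(extract_addresses(record))  # ['123 Main St, Springfield, IL 62701']
```

As in the inventory entry, the extracted addresses would only populate a lead for officer review, not trigger any action on their own.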
| Department Of Homeland Security | ICE | DHS-407 | Biometric Check-in for ATD-ISAP (SmartLINK) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Facial verification is only one option for check-in. If the remote check-in fails, either because it was unable to verify the match between the user and their previously taken photo, or because of other potential issues (poor lighting, camera/phone malfunction, etc.), an officer will manually review the check-in photo against the previously taken photos. If that fails, the user can schedule an in-person check-in at their local ERO office. Therefore, the output of AI (facial verification for a remote check-in) is not the primary basis for a decision or action that would affect the individual's rights or safety. It is a convenience to help save time for both the user and officers. | Computer Vision | This use case intends to solve the problem of the need for frequent in-person check-ins for participants in the ATD-ISAP program. | The ISAP Biometric Monitoring App is a technology option that allows participants to report in using a smartphone. This app verifies a participant’s identity, determines their location, and quickly collects status change information. The app adds functionality not available with telephonic reporting and is less intrusive than a GPS unit. The ISAP monitoring app limits in-person interactions for routine check-ins, allowing more time to be allocated to non-compliant participants, complex removal proceedings cases, and docket management. | There are two outputs related to using the ISAP Biometric Monitoring App. Either a participant “passes” (biometric match) or the photo is moved to a “pending review” status. In either scenario, a human can evaluate the response. 
| 01/02/2018 | a) Purchased from a vendor | BI | Yes | There are two outputs related to using ISAP Biometric Monitoring App. Either a participant “passes” (biometric match) or the photo is moved to a “pending review” status. In either scenario, a human can evaluate the response. | The training process includes datasets from diverse facial images and real-world environments. The images are preprocessed to normalize variables like lighting and facial expressions, making them suitable for facial matching. Data techniques, such as rotation and scaling, are also applied to alleviate the need for additional data collection. The models are trained to extract facial features and to match them accurately. | Yes | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | No | https://www.dhs.gov/sites/default/files/2023-08/privacy-pia-ice062-atd-august2023.pdf | |||||||||||
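The SmartLINK entry above describes a 1:1 facial-verification check whose only outputs are a "pass" or a "pending review" routed to a human. A hedged sketch of that decision shape, assuming embedding vectors compared by cosine similarity against a fixed threshold; the threshold and representation are illustrative assumptions, not the vendor's actual implementation:

```python
import math

MATCH_THRESHOLD = 0.8  # assumed similarity cutoff, not the vendor's real value

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_in_result(enrolled: list[float], check_in: list[float]) -> str:
    """Return 'pass' on a biometric match, else 'pending review' for a human."""
    similarity = cosine_similarity(enrolled, check_in)
    return "pass" if similarity >= MATCH_THRESHOLD else "pending review"

print(check_in_result([0.1, 0.9, 0.3], [0.1, 0.9, 0.3]))    # pass
print(check_in_result([0.9, -0.2, 0.1], [-0.1, 0.8, 0.2]))  # pending review
```

Note that, per the entry, a "pending review" result never denies the check-in outright; it only routes the photo to an officer.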
| Department Of Homeland Security | MGMT | DHS-2434 | User and Entity Behavior Analytics (UEBA) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The CVAS sub-system inside the VIEW system of record is not a high-impact use case because the AI's output from CVAS does not actually "serve as a principal basis for" the relevant type of agency action or decision. The collated information supports decisions about staffing priorities but is not used to make security-related or employment-related decisions. The use case also does not perform workplace monitoring or surveillance. | Classical/Predictive Machine Learning | The Continuous Vetting Analytics Service (CVAS) sub-system in the Vetting Identities for an Enterprise Workforce (VIEW) system of record will solely be used to aggregate information provided by or authorized to be collected by a DHS applicant and then present the aggregated information in a structured manner to personnel security adjudicative specialists to help them prioritize workload and improve review processes. | Enhanced Threat Detection: Identify patterns in user behaviors that deviate from normal baselines, signaling potential insider threats or security risks. Continuous Risk Assessment: Move from static vetting to a continuous vetting process by monitoring real-time activities and interactions with secure information. Improved Incident Response: Enable rapid responses to high-risk behaviors, escalating alerts for timely intervention by security personnel. | Behavioral Baseline Modeling: Develop a baseline for each individual based on regular access patterns, network usage, and interactions with classified data or secure areas. 
Anomaly Detection: Employ machine learning models to detect deviations from established baselines, such as unusual access times, atypical access to high-sensitivity resources, or excessive data downloads. Risk Scoring: Assign a risk score to each user based on observed anomalies, factoring in historical behavior, job role, and access level, allowing security teams to prioritize investigations. Automated Alerts & Reporting: Generate automated alerts for high-risk behaviors or patterns of concern and deliver timely reports to personnel security teams for further investigation. | 08/07/2023 | a) Purchased from a vendor | CANDA Solutions | Yes | Behavioral Baseline Modeling: Develop a baseline for each individual based on regular access patterns, network usage, and interactions with classified data or secure areas. Anomaly Detection: Employ machine learning models to detect deviations from established baselines, such as unusual access times, atypical access to high-sensitivity resources, or excessive data downloads. Risk Scoring: Assign a risk score to each user based on observed anomalies, factoring in historical behavior, job role, and access level, allowing security teams to prioritize investigations. Automated Alerts & Reporting: Generate automated alerts for high-risk behaviors or patterns of concern and deliver timely reports to personnel security teams for further investigation. | Personally Identifiable Information (PII), Sensitive Personally Identifiable Information (SPII), Clearance and Background Investigation Data: Data from security clearances, background investigations, and adjudication records. | Yes | Yes | |||||||||||||
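The baseline/anomaly/risk-scoring flow described in the UEBA entry above can be sketched with a simple z-score detector. The activity counts, scoring rule, and alert threshold below are illustrative assumptions, not the vendor's actual models:

```python
from statistics import mean, stdev

ALERT_THRESHOLD = 3.0  # assumed: flag activity beyond three standard deviations

def risk_score(baseline_counts: list[int], observed: int) -> float:
    """Score how far today's activity count deviates from the entity's baseline."""
    if len(baseline_counts) < 2:
        return 0.0  # not enough history to model a baseline
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    if sigma == 0:
        return 0.0 if observed == mu else 10.0  # arbitrary high score
    return abs(observed - mu) / sigma  # z-score as a crude risk score

history = [10, 12, 11, 9, 13, 10, 12]  # daily resource-access counts (baseline)
score = risk_score(history, 45)        # today's count is far above baseline
if score > ALERT_THRESHOLD:
    print(f"ALERT: risk score {score:.1f}; open a case for analyst review")
```

As in the entry, the alert only queues the anomaly for a security team; it makes no determination about the individual.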
| Department Of Homeland Security | USCIS | DHS-14 | Biometrics Enrollment Tool (BET) Fingerprint Quality Check | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. This tool simply identifies whether a fingerprint collected is of sufficient quality to pass the FBI fingerprint check process, ultimately maximizing the number of successful FBI submissions while minimizing the number of fingerprint recaptures necessary. This quality assurance step is one task in a series of adjudication activities but is not determinative of the overall adjudication decision. The tool saves personnel time and resources while enhancing customer experience by helping to ensure that only quality fingerprints are passed forward for matching against the FBI Identity History Summary Check. Results of FBI fingerprint checks are subsequently reviewed by a human as part of the immigration adjudicative process. | Classical/Predictive Machine Learning | This effort aims to maximize the number of successful FBI submissions while minimizing the number of fingerprint recaptures necessary. The output is a Numerical Fingerprint Quality score, which is compared against fingerprint quality thresholds (per finger and per set of fingerprints) to align with FBI specifications. | BET assists in determining if the fingerprint taken is good enough quality to pass the FBI fingerprint check process. 
It provides immediate feedback when a set of prints is likely to be rejected by the FBI by incorporating machine learning models into the BET application. The FBI will not disclose their quality grading criteria for fingerprints, leaving BET with the responsibility of determining quality to prevent unnecessary secondary encounters with applicants. | Numerical Fingerprint Quality score, which is compared against fingerprint quality thresholds (per finger and per set of fingerprints) to align with FBI specifications | 01/01/2024 | c) Developed with both contracting and in-house resources | Pluribus Digital | Yes | Numerical Fingerprint Quality score, which is compared against fingerprint quality thresholds (per finger and per set of fingerprints) to align with FBI specifications | Internal data from BET data capture into Databricks lakehouse, numerical values representing fingerprint quality scores determined by the BET system outside of the AI workflow. | Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | Yes | https://www.dhs.gov/sites/default/files/2024-11/24_0930_priv_pia-dhs-uscis-cpms-060d.pdf | |||||||||||
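The BET entry above describes comparing a numerical fingerprint quality score against thresholds applied per finger and per set. A minimal sketch of that check; the threshold values are illustrative assumptions, since (as the entry notes) the FBI does not disclose its grading criteria:

```python
PER_FINGER_THRESHOLD = 40  # assumed minimum quality score for any single finger
PER_SET_THRESHOLD = 55     # assumed minimum average quality for the full set

def fingerprint_set_passes(scores: list[int]) -> bool:
    """True when every finger and the set average clear their thresholds."""
    if not scores:
        return False
    if any(s < PER_FINGER_THRESHOLD for s in scores):
        return False  # a single low-quality print prompts immediate recapture
    return sum(scores) / len(scores) >= PER_SET_THRESHOLD

print(fingerprint_set_passes([62, 71, 55, 48, 90, 66, 73, 58, 61, 70]))  # True
print(fingerprint_set_passes([62, 71, 55, 12, 90, 66, 73, 58, 61, 70]))  # False
```

Failing the check at capture time is what lets the operator retake prints immediately instead of after an FBI rejection.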
| Department Of Homeland Security | USCIS | DHS-180 | Automated Name and Date of Birth (DOB) Harvesting Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | The use case improves case processing efficiency by reducing the amount of time USCIS staff must spend to manually find aliases and dates of birth (DOBs) in existing records of an individual. The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The use case increases efficiency of tasks associated with accurate and timely identification, analysis, and review of biographical information needed for adjudication. The AI outputs are suggested aliases and DOBs related to the individual query, which USCIS staff must review to accept, reject, or ignore the suggested information. The AI outputs reduce the amount of adjudicative time spent manually harvesting aliases and DOBs. The use case increases efficiency of tasks associated with reviewing existing records for adjudicating requests for immigration benefits. Completing such adjudications is not dependent on the use case; however, the lack of this tool would significantly increase human processing times and potentially reduce the accuracy of information consulted during the human review process. | Classical/Predictive Machine Learning | Adjudicators spend a significant amount of time manually harvesting aliases and dates of birth (DOBs) from the identity history summary (IdHS) report attached to the ELIS case as part of the Manual Name Harvesting Task during case processing. 
| To reduce the amount of adjudicative time spent manually harvesting aliases and dates of birth (DOBs) from the identity history summary (IdHS) report attached to the ELIS case as part of the Manual Name Harvesting Task during case processing. | Suggested Names and DOBs from IdHS record. | 27/06/2022 | c) Developed with both contracting and in-house resources | SAIC and DV United | Yes | Suggested Names and DOBs from IdHS record. | Training and evaluation for ANH was performed using a large set of previously annotated IdHS records (raw text) in a secure environment separate from our standard development environment. The system uses Spark NLP and distilbert embeddings for the model input, so no raw text from these files is stored or accessible from the final model or associated logged artifacts. Annotations were sourced from results of previously completed manual name harvesting tasks. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | |||||||||||
| Department Of Homeland Security | USCIS | DHS-189 | ELIS Card Photo Validation Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case supports beneficiaries submitting an e-filed I-765 via myUSCIS to apply for employment authorization. These applications include a digital ID photo of the applicant, which will be printed on their Employment Authorization Document (EAD) card if the application is accepted. The use case determines whether a user-uploaded ID photo is suitable for use on an EAD card, and notifies the submitter if it detects a potential quality issue with the photo. (SEE DHS CAIO SUPER MEMO FY24) --- The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The AI outputs are a real-time advisory message to the submitter of the photo that it is of insufficient quality for card production at which point the submitter may choose to resubmit or not. While adjudication of EADs is not dependent on this use case, this tool enhances efficiency and customer service by minimizing the number of cards failing in production and delays related to subsequent requests for information. (SEE USCIS REQUEST FOR DHS CAIO DETERMINATION FY24) | Computer Vision | The card photo validation solution was developed to validate each submitted photo against a set of business-defined requirements in near-real-time in order to eliminate rejections/RFEs for the user uploaded photos. | Ensuring beneficiary uploaded photos meet USCIS requirements. 
This helps ensure photos are correct before making ID cards, saving adjudicator time and avoiding delays. | Response back to user based on the pre-defined quality checks if the uploaded photo meets USCIS requirements. Users still have the option to ignore the warnings and upload the photo. | 15/03/2022 | c) Developed with both contracting and in-house resources | SAIC and DV United | Yes | Response back to user based on the pre-defined quality checks if the uploaded photo meets USCIS requirements. Users still have the option to ignore the warnings and upload the photo. | Initial face detection is performed by pretrained dlib embeddings as implemented in opencv. Validation tests requiring object detection (headwear and eyeglasses) are performed using custom fine-tuning from Detectron2, an open-source object detection model. Training and testing data for these object detection problems was sourced from a combination of public domain face detection datasets and USCIS production data passport photos. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | |||||||||||
| Department Of Homeland Security | USCIS | DHS-2543 | AI Security and Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case collates information provided by or authorized to be collected by an employee as part of the network security process. The collated information supports decisions about staffing priorities but is not used to make security-related or employment-related decisions. | Generative AI | Organizations adopting AI face a number of risks from data leaks, shadow AI, and unsecured outputs. | The gateway will secure enterprise AI use by discovering shadow AI, enforcing guardrails, and preventing data spills. Its benefits include improved compliance, reduced risk, real time protection, and enhanced visibility. Organizations are empowered to securely and efficiently adopt AI. | The AI System outputs risk alerts, compliance reports, and audit logs. It predicts threats, recommends policy adjustment, and semi-autonomously enforces guardrails. It secures data and AI usage via blocking, masking, and context-aware decisions. | 31/03/2025 | a) Purchased from a vendor | Lasso | Yes | The AI System outputs risk alerts, compliance reports, and audit logs. It predicts threats, recommends policy adjustment, and semi-autonomously enforces guardrails. It secures data and AI usage via blocking, masking, and context-aware decisions. | Test data formatted to simulate PII/SPII/etc. to ensure that the solution detects it properly. | Yes | Yes | |||||||||||||
| Department Of Homeland Security | USCIS | DHS-372 | User Entity and Behavior Analytics (UEBA) for Security Operations (SecOps) Anomaly Identification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case collates information provided by or authorized to be collected by an employee as part of the network security process. The collated information supports decisions about staffing priorities but is not used to make security-related or employment-related decisions. | Classical/Predictive Machine Learning | User Entity and Behavior Analytics (UEBA) assists USCIS Security Operations (SecOps) in identifying behavioral anomalies that most likely indicate malicious intent or heightened risk associated with user identities and endpoint hosts accessing the USCIS network. The analytics provide risk scoring, which helps USCIS SecOps to prioritize highest risk incidents first. | UEBA's purpose is to review USCIS system logs to determine when an entity is performing actions that are anomalous. An entity can be classified as a workstation, server, or an internal USCIS system account. The UEBA ingests logs from systems to perform analytics based on models that are manually created and maintained. UEBA uses the models to apply a risk score to the entity, which is then used to create a case (or ticket) for Security Operations analyst review. The AI reviews the action of the analyst to adjust the risk scoring for future events. Output would assist in prioritizing cyber events for further manual investigation. | Output of the Machine Learning is an alert with all artifacts for the SOC to investigate. The alert is used as a recommendation to prioritize specific investigations in the SOC ticket queue. 
| 17/08/2024 | c) Developed with both contracting and in-house resources | Gurucul | Yes | Output of the Machine Learning is an alert with all artifacts for the SOC to investigate. The alert is used as a recommendation to prioritize specific investigations in the SOC ticket queue. | Data used to tune models is USCIS internal system logs. | No | Yes | |||||||||||||
| Department Of Homeland Security | USCIS | DHS-56 | Person-Centric Identity Services Information Compilation Check | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | This use case verifies the ongoing accuracy of information compiled from within the Person-Centric Identity Services (PCIS). It identifies which records from within PCIS best match search criteria to support case processing. ------ The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. The use case compiles records from across a variety of USCIS systems to provide a comprehensive history of a person’s interaction with USCIS. The output of this can be visualized through a report or dashboard to assist with case review, ensuring access to helpful and accurate records. Adjudicators review the outputs of this use case, alongside other information and insights, to process a case and make a final determination. The adjudication process can be conducted without this tool; however, doing so would significantly increase the time and effort required to process immigration requests. | Classical/Predictive Machine Learning | The aim of this use case is to leverage machine learning to test the accuracy of PCIS in identifying and managing associations between individuals and their assigned A-numbers. An A-number is a unique 7-, 8-, or 9-digit number assigned to a noncitizen by DHS and plays a critical role in surfacing a person and all of their associated records from across PCIS. 
| The output of the use case is a numerical confidence score used to determine the validity of the A-number presented in search results. The confidence score identifies which records from within PCIS best match search criteria for an A-number. | Numerical likelihood score which is used to determine the validity of the A# presented. Likelihood scores are subjected to a high threshold (0.98, maximum 1) to assess whether the A# presented belongs to the individual. | 01/07/2022 | c) Developed with both contracting and in-house resources | MetroIBR | Yes | Numerical likelihood score which is used to determine the validity of the A# presented. Likelihood scores are subjected to a high threshold (0.98, maximum 1) to assess whether the A# presented belongs to the individual. | USCIS-only data derived from 7 form-processing source systems including C3, ELIS, CPMS, GLOBAL, CIS2, AR-11, CAMINO. | Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | Yes | https://www.dhs.gov/sites/default/files/2022-12/privacy-pia-uscis-pia087-pcis-december2022.pdf | |||||||||||
| Department Of Homeland Security | CBP | DHS-2366 | CBP Careers Bot - Leo | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The Natural Language Processing (NLP) chatbot is intended to help visitors to the CBP careers site navigate complex and extensive career resources quickly and easily. By providing a guided, interactive experience, the chatbot simplifies access to the most relevant career information on a user-by-user basis. Additionally, the chatbot can direct users to CBP recruiters and recruitment events, strengthening the agency's recruitment network and fostering more direct engagement with prospective candidates. | Visitors to the U.S. Customs and Border Protection (CBP) careers website can engage with a Natural Language Processing (NLP) based chatbot to access CBP career-related information and take the next action, such as contacting a recruiter, attending a career event, or applying for a CBP career. These data-driven responses allow for more natural, conversational interactions, increasing usability and the accuracy of the information provided. | The NLP-chatbot will provide natural language responses to user queries. | 30/09/2025 | c) Developed with both contracting and in-house resources | Salesforce Einstein | Yes | The NLP-chatbot will provide natural language responses to user queries. | User input is categorized and captured in Salesforce to refine the chatbot's interpretation of future inputs. | No | No | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-2373 | CBP Employee Experience | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Parsing voluminous qualitative and quantitative recruit/applicant/employee survey data. The technology provides actionable intelligence for senior leaders to better improve the recruit/applicant/employee experience, thereby increasing both yield rates and resiliency. | CBP Employee Experience is intended to ingest, interpret, and operationalize employee experience data originating from survey results and operational data to deliver real time insights related to the experience of USBP recruits, applicants, and employees. These metrics inform HRM leadership of opportunities for process improvement in order to meet congressionally mandated hiring targets and retain a qualified workforce. | Real time insights related to the experience of USBP recruits, applicants, and employees. | 01/10/2023 | c) Developed with both contracting and in-house resources | Medallia | Yes | Real time insights related to the experience of USBP recruits, applicants, and employees. | The platform uses a supervised machine learning model trained on baseline non-governmental data, which is regularly updated and tested for accuracy. CBP can further train the model by correcting sentiment tags, allowing the system to learn from feedback through both hard and soft rules. | No | No | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-2451 | Position Description Generation and Evaluation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Generative AI | Federal agencies often face challenges in creating accurate, consistent, and well-structured position descriptions (PDs) due to limited resources, the time-intensive nature of the classification process, and varying levels of expertise among HR staff and Hiring Managers. Inaccurate or poorly written PDs can lead to misclassification, legal disputes, grievances, and difficulties attracting qualified candidates, ultimately impacting workforce quality and agency performance. | ClassifAI is designed to streamline and enhance the position description (PD) creation and classification process by leveraging generative AI to produce drafts of accurate, consistent, and standards-compliant PDs. By reducing administrative burdens, improving PD quality, and minimizing classification risks, ClassifAI enables agencies to optimize workforce management, attract top talent, and achieve greater operational efficiency with fewer resources. | ClassifAI generates accurate, standards-compliant drafts of position descriptions (PDs) with tailored classification recommendations, robust and customizable language, and supporting documentation. | 16/05/2025 | c) Developed with both contracting and in-house resources | Starlo and Deloitte | No | ClassifAI generates accurate, standards-compliant drafts of position descriptions (PDs) with tailored classification recommendations, robust and customizable language, and supporting documentation. | Publicly available position descriptions (primarily from the DoD), OPM standards and guidelines (e.g., the OPM classifier’s handbook), and CBP position descriptions. | No | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-2529 | Global Entry Mobile App Traveler AI Question Answering Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Answering questions that travelers submit to the Global Entry support team. | Faster workflows, faster customer response times, fewer staff needed to address customer concerns, and lower costs. | The output is the answer to a traveler's question. The AI model is directed to answer the question from the context information we give it, verbatim. | 16/06/2025 | b) Developed in-house | Yes | The output is the answer to a traveler's question. The AI model is directed to answer the question from the context information we give it, verbatim. | Previous production traveler questions that were sanitized and anonymized, mock traveler questions. | Yes | Yes | |||||||||||||||
| Department Of Homeland Security | CBP | DHS-2530 | ChatCBP | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Workforce enablement and efficiency. | Improving Operational Efficiency: Automating information retrieval, reducing manual review time, and streamlining workflows will lead to significant time savings and increased productivity for CBP personnel. This translates to cost savings and allows agents to focus on higher-priority tasks. Enhancing Decision-Making: Providing quick and accurate access to relevant information will improve the quality and consistency of decisions across the agency. Increasing Mission Effectiveness: By applying LLM capabilities to critical use cases like hot list review and violation coding, we can enhance accuracy, reduce errors, and improve overall mission success rates. | Generative LLM that will allow users to upload, search, delete, summarize documents, conduct advanced searches, and identify similar language in documents and receive natural language-style outputs in response to their prompts. | 30/07/2025 | b) Developed in-house | No | Generative LLM that will allow users to upload, search, delete, summarize documents, conduct advanced searches, and identify similar language in documents and receive natural language-style outputs in response to their prompts. | Internal CBP document samples are used to test the efficacy and accuracy of performance of chatCBP. | Yes | Yes | |||||||||||||||
| Department Of Homeland Security | CBP | DHS-2704 | GenAI for Document Summarization and Content Generation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Workforce enablement and efficiency through the implementation of Generative AI / Large Language Models (LLM) | Improving Operational Efficiency: Automating information retrieval, reducing manual review time, and streamlining workflows will lead to significant time savings and increased productivity for CBP personnel. This translates to cost savings and allows agents to focus on higher-priority tasks. Enhancing Decision-Making: Providing quick and accurate access to relevant information will improve the quality and consistency of decisions across the agency. | Generative LLM applications deployed in a standalone capacity or embedded in existing systems that will allow users to upload, search, summarize documents, conduct advanced searches, and identify similar language in documents and receive natural language-style outputs in response to their prompts. | 01/03/2025 | c) Developed with both contracting and in-house resources | Meta, OpenAI, Google, Anthropic | No | Generative LLM applications deployed in a standalone capacity or embedded in existing systems that will allow users to upload, search, summarize documents, conduct advanced searches, and identify similar language in documents and receive natural language-style outputs in response to their prompts. | The commercial LLMs used for this use case were trained using a diverse range of publicly available data, including text from books, articles, websites, and other sources and data types. | No | Yes | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-399 | Cyber Threat Analysis (Recorded Future) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The system uses Recorded Future's platform to streamline CBP Cyber Threat Intelligence (CTI) operations by automating the identification and analysis of relevant cyber threat activity. It leverages AI/ML to transform unstructured text into structured data using natural language processing, classify events and entities to prioritize threats, forecast events through predictive modeling, and represent structured knowledge using ontologies. It enables analysts to rapidly assess vulnerabilities in CBP’s IT environment, identify adversary data, and generate cyber risk assessments, providing actionable intelligence to enhance efficiency and support Security Operations and Cyber Risk Management investigations. | Cyber Threat Analysis quickly populates query results when searching against adversary tactics, techniques, and procedures, establishing a threat scorecard. This service can also provide cyber risk scorecards for third party vendors, companies, and organizations. | Actionable intelligence supporting Security Operations Center (SOC) and Cyber Risk Management (CRM) investigations and reports. | 20/06/2024 | a) Purchased from a vendor | Recorded Future | No | Actionable intelligence supporting Security Operations Center (SOC) and Cyber Risk Management (CRM) investigations and reports. | Recorded Future AI is trained on over 10 years of threat analysis from Insikt Group, the company’s threat research division, and is combined with the insights of the Recorded Future Intelligence Graph. | No | No | ||||||||||||||
| Department Of Homeland Security | CBP | DHS-68 | Empty Container Detection Model (Cargo Insights Team) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This enhances border security and optimizes resource allocation for inspections. The model is designed to accurately identify and track empty containers in cargo shipments, preventing errors and fraud in cargo declarations. | The AI improves accuracy, enhances efficiency by prioritizing legitimate containers for inspection, and strengthens security by detecting potential smuggling risks. | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | 24/06/2023 | b) Developed in-house | Yes | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | X-ray images and associated metadata. | No | Yes | |||||||||||||||
| Department Of Homeland Security | CBP | DHS-69 | Commodity Detection Model (Cargo Insights Team) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Computer Vision | This enhances border security and optimizes resource allocation for inspections. Analyze X-Ray images and predict the commodity code, reducing the need for users to manually enter commodity codes. | This project leverages computer vision with object detection and a neural network to analyze X-Ray images and predict the commodity code. | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | 21/01/2025 | b) Developed in-house | Yes | The system applies a prediction label alongside a bounding box on record. Officers use this information along with all information provided to determine what, if any, further steps are required. | X-ray images and associated metadata. | No | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | |||||||||||||
| Department Of Homeland Security | CBP | DHS-94 | Cargo Classification Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The need to improve data classification, enable better machine learning integration, and facilitate nuanced trade entity risk assessments. By categorizing goods based on their descriptions and characteristics, these classifiers help identify potential threats associated with specific cargo types associated with prior violations. | CBP’s Cargo Classification Tool improves trade compliance and enhances cargo risk assessment by streamlining the classification of goods, enabling better integration with machine learning systems, and refining entity risk evaluations. It identifies potential threats linked to specific cargo types and prior violations by categorizing goods based on their descriptions and attributes. These improvements contribute to faster, more accurate classification and risk-based targeting, which strengthens security and facilitates trade. | The Cargo Classification Tool produces outputs that map cargo commodity descriptions to their most probable tariff codes, enhancing classification accuracy. These outputs integrate seamlessly into broader threat-specific risk models, providing features to support predictive risk assessments in cargo security. | 01/10/2020 | b) Developed in-house | Yes | The Cargo Classification Tool produces outputs that map cargo commodity descriptions to their most probable tariff codes, enhancing classification accuracy. These outputs integrate seamlessly into broader threat-specific risk models, providing features to support predictive risk assessments in cargo security. 
| This model leverages data provided by carriers within the Automated Commercial Environment (ACE), as well as transformations of that data within the Automated Targeting System (ATS). | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | Yes | https://www.dhs.gov/sites/default/files/publications/privacy_pia_cbp_tsacop_09162014.pdf | |||||||||||||
| Department Of Homeland Security | CISA | DHS-106 | Critical Infrastructure Network Anomaly Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | CyberSentry currently ingests hundreds of terabytes of data from Critical Infrastructure Partners every single day. At petabyte-scale over all collected network data, CyberSentry required a way to filter through the noise to be able to detect Advanced Persistent Threat (APT) and Nation State malicious activity happening within our Partners' networks. CyberSentry has developed numerous machine learning-based detections to identify trends, patterns, and anomalies in network data that ultimately result in both automated and manual triage by analysts. | This use case delivers improved internal government tools for hunting and detection of malicious threat actors on critical infrastructure networks. It automates manual data fusion and correlation processes and highlights potential anomalies, allowing CISA analysts to focus more time on hunting adversaries. | An interface is provided for analysts to query cybersecurity data, and dashboards are provided with potential cybersecurity alerts, including anomalies detected through predictive models and rule-based heuristics. | 10/01/2022 | b) Developed in-house | Yes | An interface is provided for analysts to query cybersecurity data, and dashboards are provided with potential cybersecurity alerts, including anomalies detected through predictive models and rule-based heuristics. | Cybersecurity cloud, network and host logs; Cybersecurity threat intelligence (CTI) | No | https://www.dhs.gov/publication/dhscisapia-037-cybersentry | Yes | https://www.dhs.gov/publication/dhscisapia-037-cybersentry | |||||||||||||
| Department Of Homeland Security | CISA | DHS-2306 | CISAChat | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Currently, retrieving and synthesizing content from hundreds of government documents is a slow, manual process. CISA users need the capability to answer questions, generate summaries, and produce textual responses efficiently using a broad range of information sources—from CISA publications to DHS mandates. AI can streamline and accelerate this workflow by automating information extraction and response generation. | Currently, multiple CISA program offices are using contractor staff to review pre-production content and other internal materials to develop summaries, key themes, and improve clarity. Leveraging a Generative AI solution improves internal agency Customer Experience (CX) and saves staff time. | LLM generated response to the questions posed on uploaded content. | 06/05/2025 | a) Purchased from a vendor | Microsoft | No | LLM generated response to the questions posed on uploaded content. | Current data used is pre-publication content that has already been approved. | No | Yes | ||||||||||||||
| Department Of Homeland Security | CISA | DHS-4 | Automated Detection of Personally Identifiable Information (PII) in Cybersecurity Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | To enhance privacy, this AI tool uses Natural Language Processing (NLP) to automatically flag potential PII for review and removal by CISA analysts. | Automated PII Detection and Review Process uses analytics to identify and manage potential PII in submissions. If PII is flagged, the submission is sent to CISA analysts, who are guided by AI to review and confirm or reject the detection, redacting information if necessary. Privacy experts monitor the system and provide feedback. The system learns from this feedback, ensuring compliance with privacy regulations and improving efficiency by reducing false positives. Regular audits ensure the process remains trustworthy and effective. | The system sends flagged data rows with potential PII to humans for review. | 01/12/2020 | c) Developed with both contracting and in-house resources | Nightwing Intelligence Solutions, LLC. Procurement Instrument ID affiliated with this use case: 70QS0124C00000002 | Yes | The system sends flagged data rows with potential PII to humans for review. | Cybersecurity indicators of compromise (IOCs), Cybersecurity threat intelligence (CTI) | Yes | https://www.dhs.gov/publication/dhsnppdpia-029-automated-indicator-sharing | Yes | https://www.dhs.gov/publication/dhsnppdpia-029-automated-indicator-sharing | ||||||||||||
| Department Of Homeland Security | FEMA | DHS-2296 | OCFO Response Augmentation Suite | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | FEMA Office of the Chief Financial Officer Generative Pre-trained Transformer (OCFO GPT), Travel Policy GPT and Fiscal Policy GPT are internal Generative AI (GenAI) tools designed to support the FEMA workforce by generating initial responses to various queries. These tools leverage relevant public and internal documents to draft preliminary responses, which are then refined prior to formal submission. FEMA OCFO GPT generates initial responses to questions for the record, leveraging public and internal documents, and provides a preliminary response to the Program Office to use in their formal response to the request. It reduces the data gathering stage, saving analysts 80% of the initial effort. Travel Policy GPT generates initial responses to questions regarding FEMA/DHS Travel Policy, including the JTR, and provides a preliminary response to the travel specialist to use in their formal response to the queries. It improves response times, saving users 80-90% of the time compared to regular engagement with the Travel Service Center. Fiscal Policy GPT provides preliminary responses to questions regarding FEMA/DHS Fiscal Policy and will generate a draft response with references to assist FEMA internal workforce in compliance with established policy. It saves users 80-90% of the time compared to regular engagement with DHS /FEMA OCFO policy and speeds up resolution times. | FEMA OCFO GPT-B is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions for the record and providing a preliminary response to the Program Office to use in their formal response to the request. 
FEMA OCFO GPT provides draft responses reducing the data gathering stage and providing additional time for analysis, response, and approval. This has reduced the analyst initial level of effort versus individual research by 80% on initial surveys. Travel Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions regarding FEMA/DHS Travel Policy, and providing a preliminary response to the travel specialist to use in their formal response to the queries. The tools provide improved responses saving the end user time versus regular engagement with the Travel Service Center. The tool also allows travelers to ask specific questions that require Travel Service Center engagement, limiting the needed triage and speeding resolution times. Fiscal Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in providing preliminary responses to questions regarding FEMA/DHS Fiscal Policy. The tool provides improved responses saving the end user time versus regular engagement with the DHS/FEMA OCFO Policy. The tool also allows internal users to ask specific questions that require Fiscal Policy engagement, limiting the needed triage and speeding resolution times. | FEMA OCFO GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions for the record and providing a preliminary response to the Program Office to use in their formal response to the request. FEMA OCFO GPT leverages public facing and internal deliberative documents to assist in answering questions the Agency receives. The tool generates a draft response that is then refined/updated prior to providing a formal response. Travel Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions regarding FEMA/DHS Travel Policy and providing a preliminary response to the travel specialist to use in their formal response to the queries. 
The tool generates a draft response that is then refined/updated prior to providing a formal response. Fiscal Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in providing preliminary responses to questions regarding FEMA/DHS Fiscal Policy. Fiscal Policy GPT will leverage the DHS FMPM and FEMA Fiscal Policy documents. The tool is planned to generate a draft response with references to assist FEMA internal workforce in compliance with established policy. These tools are leveraged in the data gathering stage and do not replace any current analysts work or leadership review, as required, prior to submittal via any formal request for information process. | 01/02/2024 | c) Developed with both contracting and in-house resources | Microsoft (OpenAI Azure Commercial Cloud offerings) | Yes | FEMA OCFO GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions for the record and providing a preliminary response to the Program Office to use in their formal response to the request. FEMA OCFO GPT leverages public facing and internal deliberative documents to assist in answering questions the Agency receives. The tool generates a draft response that is then refined/updated prior to providing a formal response. Travel Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in generating initial responses to questions regarding FEMA/DHS Travel Policy and providing a preliminary response to the travel specialist to use in their formal response to the queries. The tool generates a draft response that is then refined/updated prior to providing a formal response. Fiscal Policy GPT is an internal facing GenAI tool to augment the FEMA workforce in providing preliminary responses to questions regarding FEMA/DHS Fiscal Policy. Fiscal Policy GPT will leverage the DHS FMPM and FEMA Fiscal Policy documents. 
The tool is planned to generate a draft response with references to assist FEMA internal workforce in compliance with established policy. These tools are leveraged in the data gathering stage and do not replace any current analysts work or leadership review, as required, prior to submittal via any formal request for information process. | Budget Exhibits, Passback Materials, Hearing Testimony, Questions Received, Answers Provided Travel Policy Documents (Joint Travel Regulation (JTR), DHS Travel Policy, FEMA Travel Policy) Fiscal Policy Documents - Treasury Financial Manual (TFM), DHS Financial Policy Manual, FEMA Fiscal Policies | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2441 | OCFO Code Assist GPT | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Code Assist Generative Pre-trained Transformer (GPT) is an internal facing Generative AI (GenAI) tool to augment the FEMA workforce in generating and troubleshooting existing queries in established query languages (e.g., SQL, Java, COBOL). Users enter the language they are querying, and the Code Assist GPT then provides a proposed query based on the elements provided. If the query is unsuccessful, the tool maintains the session, allowing users to prompt for enhancements until expected results are achieved. At the end of the session, all prompts and queries are removed, and no data is stored outside of the active session. The tool improves query generation and enables rapid iteration, saving users 80-90% of the time compared to custom query development. It also supports various computer languages to assist the data analytics community. | Saves users 80-90% of the time compared to custom query development; enables rapid iteration, including error resolution, when querying complex data sets; and supports multiple computer languages (COBOL, SQL, Python, etc.) | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging ChatGPT4o. Code Assist GPT is an internal facing GenAI tool to augment the FEMA workforce in generating and troubleshooting existing queries in established query languages. The user simply enters the language they are trying to query (e.g., SQL, Java, COBOL) and the Code Assist GPT then provides a proposed query based on the elements provided by the user. 
If the user experiences an error or the query is not successful, the Code Assist GPT maintains the session as long as the user is still logged in and hasn't restarted the session; the user can continue to prompt the GPT to provide enhancements to the provided query until results are as expected. At the end of the user session, all prompts/queries are removed and no data is stored outside of the active user session. | 11/04/2025 | c) Developed with both contracting and in-house resources | Microsoft (OpenAI part of Azure Commercial Cloud offerings) | Yes | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging ChatGPT4o. Code Assist GPT is an internal facing GenAI tool to augment the FEMA workforce in generating and troubleshooting existing queries in established query languages. The user simply enters the language they are trying to query (e.g., SQL, Java, COBOL) and the Code Assist GPT then provides a proposed query based on the elements provided by the user. If the user experiences an error or the query is not successful, the Code Assist GPT maintains the session as long as the user is still logged in and hasn't restarted the session; the user can continue to prompt the GPT to provide enhancements to the provided query until results are as expected. At the end of the user session, all prompts/queries are removed and no data is stored outside of the active user session. | Programming languages were used to test and fine-tune the model, such as JavaScript, COBOL, SQL, Python. The tool leverages the ChatGPT4o model with the inherent capability. The tool was validated with extensive User Acceptance Testing validating inputs and outputs against similar queries that already were written and performed manually. This was then validated with a pilot group of users with an expansion to other user groups for use. | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2710 | Executive Summary GPT | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | Provides the ability for users to upload documents and receive an executive summary of those documents. | Quickly analyze lengthy or complex documents for relevance to FEMA and/or provide a high-level summary for leadership on potential impacts. | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging ChatGPT4o. A summary of a document or documents that can be leveraged for high-level summaries for leadership or potential impacts. | 16/04/2025 | a) Purchased from a vendor | Microsoft (Azure Commercial OpenAI Cloud Offerings) | Yes | Leverages Azure Commercial OpenAI within the FEMA system boundary, currently leveraging ChatGPT4o. A summary of a document or documents that can be leveraged for high-level summaries for leadership or potential impacts. | Sample documents were provided and the executive summaries were reviewed for relevance/accuracy. | No | Yes | ||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2720 | Public Assistance Workload Projections | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The use case is predicting recovery program quantities of interest using supervised learning models, including predicting the number of applicants who will apply for Public Assistance, the number of PA projects that applicants will submit, the number of sites that will need to be inspected per PA project, the cost of delivering assistance, etc. Supervised learning models include but are not limited to the use of sample statistics, generalized linear models, decision trees, and deep neural networks for the purpose of predicting unknown quantities. | 1. For informational purposes: the models will produce predictions for to-be-determined quantities of interest. These quantities are often of interest to Agency personnel in the field, region, and headquarters, as well as DHS, OMB, NSC, and the White House. 2. For decisional purposes: in addition to being informative, the model’s predictions are likely to be used for decision making. Projections help inform staffing levels and timing. | Supervised learning models produce predictions, not recommendations and not decisions (though they can be used to inform human users in making recommendations and decisions). At a minimum, these supervised learning models will produce point predictions for the different quantities of interest for disaster declarations. Additionally, supervised learning models may produce prediction intervals or predictive distributions as feasible and appropriate for the given prediction problem. Often these outputs will be shared via business intelligence tools (e.g., Tableau or PowerBI) for wide internal FEMA use. 
Some predictions may be shared to a more restricted audience through simpler means (e.g., an Excel workbook) as appropriate. | 01/10/2018 | b) Developed in-house | No | Supervised learning models produce predictions, not recommendations and not decisions (though they can be used to inform human users in making recommendations and decisions). At a minimum, these supervised learning models will produce point predictions for the different quantities of interest for disaster declarations. Additionally, supervised learning models may produce prediction intervals or predictive distributions as feasible and appropriate for the given prediction problem. Often these outputs will be shared via business intelligence tools (e.g., Tableau or PowerBI) for wide internal FEMA use. Some predictions may be shared to a more restricted audience through simpler means (e.g., an Excel workbook) as appropriate. | FEMA: Historical Declaration and Public Assistance activity data; U.S. Census: Housing Units Logged, Density Housing Units, Number City/Township Govs, Number of Special District Govs; DHS Infrastructure: Fire Stations, Electric Substations, Dams, Ten Mile Power Lines; Dept of Agriculture: Agricultural Land (sq. miles), Wetland (sq. miles), Developed Land (sq. miles) | No | Yes | |||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2722 | Individual Assistance (IA) Predictive Models for Program Quantities | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Emergency Management | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The use case is predicting recovery program quantities of interest using supervised learning models, including predicting the number of applicants who will apply for Individual Assistance, how many inspections will be issued, and how many units are required for direct housing. Supervised learning models include but are not limited to the use of sample statistics, generalized linear models, decision trees, and deep neural networks for the purpose of predicting unknown quantities. | The models are intended to quickly quantify and reduce uncertainty around key quantities of interest to enable better programmatic decision making, such as workload management, pre-placement of staff, etc. | The outputs are the predicted values for the quantities of interest, e.g., the number of survivors who will register for assistance, the number of inspections issued, etc. | 01/02/2019 | b) Developed in-house | No | The outputs are the predicted values for the quantities of interest, e.g., the number of survivors who will register for assistance, the number of inspections issued, etc. | Historical data obtained from the National Emergency Management Information System (NEMIS); Decennial Census and American Community Survey household data | No | Yes | |||||||||||||||
| Department Of Homeland Security | ICE | DHS-2424 | AI Assisted Compromise Email Detector (AACED) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of the extensive manual effort required to review emails for signs of cyber compromise. | The use case was developed to assist ICE SOC in reviewing a collection of emails between ICE personnel and Microsoft that were part of Emergency Directive 24-02. The use case provides a faster mechanism for SOC analysts to determine indicators of compromise, substantially reducing the level of effort required for their analysis. To assist the analysts, Named Entity Recognition (NER) was used to detect PII and other associated keywords to increase analyst productivity and reduce the time required to analyze emails. | Outputs are named entities and generated text for specific questions. Chat interface for analysts to conduct Q&A with an email as context. | 01/06/2024 | b) Developed in-house | No | Outputs are named entities and generated text for specific questions. Chat interface for analysts to conduct Q&A with an email as context. | Stored Agency emails used for validation. | Yes | Yes | |||||||||||||||
| Department Of Homeland Security | ICE | DHS-2425 | Intelligent Document Processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Computer Vision | This use case intends to solve the problem of the manual effort required to validate and extract data from forms. | Business units within ICE leverage these services to automate repeatable, time-consuming processes such as invoice processing and form entry validation and extraction. This platform will provide Optical Character Recognition and machine learning models to verify, extract, and classify information from ICE forms. Using AI to provide information extraction for these processes saves ICE personnel a significant amount of time while improving data quality and enabling automation. | This platform will provide Optical Character Recognition and machine learning models to verify, extract, and classify information from ICE forms. | 01/06/2019 | c) Developed with both contracting and in-house resources | UiPath; Microsoft; Apryse; Personable Inc. | Yes | This platform will provide Optical Character Recognition and machine learning models to verify, extract, and classify information from ICE forms. | ICE’s document understanding platforms provide out-of-the-box models to verify and/or extract information from a variety of document types. These platforms also include the ability to create an ML feedback loop to tailor the models to improve the accuracy of extracting data fields from multiple document types. In this scenario, ICE would fine-tune the document understanding model using the documents submitted to the understanding workflow. | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2426 | Cybersecurity Threat Management, Detection, and Response | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of detecting and responding to cybersecurity threats in a timely manner. | These AI capabilities provide security analysts with modern tools to identify and respond to threats much more quickly than previously possible, minimizing potential damage to systems and data. | CD&I uses several AI-enabled cybersecurity tools to analyze this data. Machine learning (ML) models, such as classification and regression models, are used to analyze historical data and detect emerging threats through pattern recognition. Other ML capabilities include continuous monitoring of ICE cybersecurity data and the algorithmic identification of real-time cyber threats. This includes recognizing phishing patterns, malware signatures, or abnormal network traffic patterns across a variety of tools. Additionally, CD&I is in the process of integrating its open-source intelligence cybersecurity threat analysis platform with an LLM. This integration will allow platform users to summarize open-source intelligence on cybersecurity threats and more easily research and respond to potential cybersecurity events. | 16/03/2021 | a) Purchased from a vendor | Illumio; AttackIQ; Cofense; Splunk; Crowdstrike; Polarity | Yes | CD&I uses several AI-enabled cybersecurity tools to analyze this data. Machine learning (ML) models, such as classification and regression models, are used to analyze historical data and detect emerging threats through pattern recognition. Other ML capabilities include continuous monitoring of ICE cybersecurity data and the algorithmic identification of real-time cyber threats. 
This includes recognizing phishing patterns, malware signatures, or abnormal network traffic patterns across a variety of tools. Additionally, CD&I is in the process of integrating its open-source intelligence cybersecurity threat analysis platform with an LLM. This integration will allow platform users to summarize open-source intelligence on cybersecurity threats and more easily research and respond to potential cybersecurity events. | The cybersecurity solutions use commercially available large language models that have been trained on publicly available data by their providers. There was no additional training using agency data on top of what is available in the models’ base set of capabilities. During operation, the AI models interact with ICE production data from multiple sources, including data from Microsoft Defender for Office (Polarity) and suspected malicious reported emails from ICE personnel (Triage). | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2515 | AI-Enhanced ICE Tip Processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Generative AI | This use case intends to solve the problem of the time-consuming manual effort required to review and categorize incoming tips. | The use of AI in this process enables the Tip Line team to more quickly identify and action tips recommended for urgent case categories. Additionally, the introduction of a BLUF field saves time by providing analysts with a high-level understanding of a tip before they review its details. | This solution uses a large language model (LLM) to enrich web tips with two additional data elements: (1) a high-level summary of the tip (BLUF), and (2) a recommended case category. The LLM generates BLUFs in English, regardless of the language used in the raw tip submission. For non-English tips, analysts may click a button to translate the full tip violation summary data element into English. The LLM is configured to only recommend case categories from a list of predefined HSI case categories. | 02/05/2025 | a) Purchased from a vendor | Palantir | Yes | This solution uses a large language model (LLM) to enrich web tips with two additional data elements: (1) a high-level summary of the tip (BLUF), and (2) a recommended case category. The LLM generates BLUFs in English, regardless of the language used in the raw tip submission. For non-English tips, analysts may click a button to translate the full tip violation summary data element into English. The LLM is configured to only recommend case categories from a list of predefined HSI case categories. | The system uses commercially available large language models trained on publicly available data by their providers. 
There was no additional training using agency data on top of what is available in the models’ base set of capabilities. During operation, the AI models interact with tip submissions. | Yes | Yes | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2553 | Media Classifier for Computer and Digital Storage Evidence | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Computer Vision | The AI is intended to solve the problem of analysts having to manually sort and review large volumes of media files from digital storage devices, which makes it difficult and time‑consuming to organize evidence and identify potentially relevant material. | By automating the initial classification of large volumes of media evidence, the platform enables Homeland Security Investigations personnel to more efficiently identify and review potentially relevant information, improving the overall effectiveness of digital investigations. | The platform incorporates a machine learning model that classifies media evidence from lawfully obtained computer and digital storage devices and suggests category tags based on user-selected categories, such as cars, drugs, or weapons. | 29/08/2023 | a) Purchased from a vendor | Cellebrite | No | The platform incorporates a machine learning model that classifies media evidence from lawfully obtained computer and digital storage devices and suggests category tags based on user-selected categories, such as cars, drugs, or weapons. | The vendor did not provide information on the data sets used to train its models. However, HSI does not provide the vendor with any agency data to train, fine-tune, or evaluate performance of the model(s) used in this use case. | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2758 | AI-Powered Developer Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of time‑consuming manual developer tasks, including debugging code, writing database queries, and analyzing system metrics, which slows application development and makes it harder to quickly identify and fix issues. | By streamlining routine coding tasks and surfacing useful system insights, AI-enabled developer tools increase developer productivity and support faster, higher-quality delivery of software across ICE. | The outputs of these AI-enabled tools include suggested code snippets, refactoring recommendations, optimized queries, and analytics summaries related to system behavior or performance. These outputs are presented to developers as proposed changes or insights, which must be reviewed and approved before being incorporated into the codebase through existing version control and deployment processes. The tools do not directly modify production systems; all changes must go through standard human review, testing, and approval workflows. | 15/04/2025 | a) Purchased from a vendor | Palantir | Yes | The outputs of these AI-enabled tools include suggested code snippets, refactoring recommendations, optimized queries, and analytics summaries related to system behavior or performance. These outputs are presented to developers as proposed changes or insights, which must be reviewed and approved before being incorporated into the codebase through existing version control and deployment processes. The tools do not directly modify production systems; all changes must go through standard human review, testing, and approval workflows. 
| The system uses commercially available large language models that have been trained on publicly available data by their providers. There was no additional training using agency data on top of what is available in the models’ base set of capabilities. | Yes | No | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-2759 | Open-Source Intelligence for Investigations | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to solve the problem of analysts having to manually search and make sense of vast amounts of multilingual, multimodal publicly available online data, which makes it difficult to efficiently identify relevant identifiers, high‑risk content, and patterns needed to support Homeland Security Investigations (HSI) investigations. | These AI tools significantly reduce the time and effort required to sift through large datasets, improve the ability to uncover relevant information, and enhance the overall efficiency and effectiveness of HSI’s investigative operations. | The outputs of these platforms include flagged risk alerts, extracted identifiers, image and sentiment classification, and suggested investigative leads. These platforms do not perform biometric identification, facial recognition for identity verification, autonomous targeting, or automated enforcement actions. All AI-enabled outputs are subject to mandatory human-in-the-loop review prior to any investigative, operational, or enforcement action. | 01/09/2023 | a) Purchased from a vendor | Penlink, Fivecast | No | The outputs of these platforms include flagged risk alerts, extracted identifiers, image and sentiment classification, and suggested investigative leads. These platforms do not perform biometric identification, facial recognition for identity verification, autonomous targeting, or automated enforcement actions. All AI-enabled outputs are subject to mandatory human-in-the-loop review prior to any investigative, operational, or enforcement action. 
| The AI platforms used for open-source intelligence investigations rely on pre-trained large language models, natural language processing models, and other third-party AI services. These models are trained on publicly available and commercially licensed data. No DHS or agency data is used to train, fine-tune, or develop the AI models. | Yes | https://www.dhs.gov/sites/default/files/2024-11/24_1126_priv_pia_ice064_socialmedia.pdf | Race/Ethnicity, Sex/Gender, Age | No | https://www.dhs.gov/sites/default/files/2024-11/24_1126_priv_pia_ice064_socialmedia.pdf | |||||||||||
| Department Of Homeland Security | ICE | DHS-P1 | Normalization Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case intends to solve the problem of data redundancies, inconsistencies, and difficult-to-integrate data that hinder the efficiency and accuracy of investigations. | HSI utilizes artificial intelligence to enhance data accuracy and efficiency by verifying, validating, correcting, and normalizing various types of information, including addresses, phone numbers, names, and ID numbers. This process helps to eliminate data entry errors, detect intentional misidentification, and connect related information across multiple datasets, ultimately reducing the time and resources required for investigations. The machine learning-powered normalization services offered by HSI include converting ambiguous addresses into usable formats, identifying ID types from partial information, categorizing names with complex suffixes and family names, and standardizing phone numbers to the E164 format, including determining their originating country. By normalizing and improving the quality of investigative datasets, HSI is able to use more advanced tools to find correlations and leads that would have otherwise gone undetected without extensive manual effort. | The output includes normalized data that improves search capability during investigations. 
This includes normalizing data to update less well-defined addresses into usable addresses for analysis (such as those using mile markers instead of a street number); inferring ID type based on a user-provided ID value (such as distinguishing a SSN from a DL number without additional context); categorizing name parts while taking into account additional factors (including generational suffixes and multi-part family names); and validating and normalizing phone numbers to the E164 standard, including their identified country of origin. | 01/04/2021 | c) Developed with both contracting and in-house resources | Booz Allen | Yes | The output includes normalized data that improves search capability during investigations. This includes normalizing data to update less well-defined addresses into usable addresses for analysis (such as those using mile markers instead of a street number); inferring ID type based on a user-provided ID value (such as distinguishing a SSN from a DL number without additional context); categorizing name parts while taking into account additional factors (including generational suffixes and multi-part family names); and validating and normalizing phone numbers to the E164 standard, including their identified country of origin. | Data holdings within HSI case files that require normalization (e.g., subpoenaed phone records). This includes, but is not limited to, evidentiary records containing phone numbers, names, addresses, and ID numbers. | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | Yes | https://www.dhs.gov/sites/default/files/2025-06/25_0618_priv_pia-ice-055-raven-appendix-update.pdf | ||||||||||||
| Department Of Homeland Security | MGMT | DHS-2433 | DHS-Chat | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | DHS personnel need a reliable solution to quickly access accurate information, process documents, and support work tasks within the DHS Workspace. Existing workflows are often time-consuming and inefficient, and there is a need to streamline operations while maintaining compliance with security requirements. | This is a chatbot based on a large language model (LLM) for internal DHS employee use. It is like ChatGPT for DHS, but approved for use with non-classified internal information, including FOUO (For Official Use Only) and CUI (Controlled Unclassified Information), due to its improved security compared to publicly available chatbots. This tool is able to dynamically create written content through text prompts submitted by the user. Approved applications of this tool to DHS business include generating first drafts of documents that a human would subsequently review, conducting and synthesizing research on open-source information and internal documents, and developing briefing materials or preparing for meetings and events. | The internally available generative AI tool outputs text based on the user's input. | 12/12/2024 | b) Developed in-house | Yes | The internally available generative AI tool outputs text based on the user's input. | No agency-owned data was used to train, fine-tune, or evaluate the model. The model was trained on publicly available datasets and general knowledge up to the specified cutoff date. No DHS-specific or agency-owned data was incorporated during model development. | No | Yes | |||||||||||||||
| Department Of Homeland Security | MGMT | DHS-2453 | ESEC Inquiry (STORM) Summarization | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | This use case is intended to decrease the time it takes to enter incoming requests into STORM. By summarizing the request, analysts are able to more quickly take action and assign it to relevant parties. It will also lead to future capabilities to draft responses to those requests. | ESEC-STORM AI employs Generative Artificial Intelligence (Gen-AI) technology to automate the creation of document summaries. This advanced system facilitates the automatic integration of these summaries into the System of Tracking, Operations, and Record Management (STORM), thereby optimizing the management of correspondence and information requests. | When a user creates a new work package and uploads a letter, it activates a Power Apps workflow that creates a summary. | 13/12/2024 | c) Developed with both contracting and in-house resources | Microsoft | Yes | When a user creates a new work package and uploads a letter, it activates a Power Apps workflow that creates a summary. | The commercial models used for this use case were trained using a diverse range of publicly available data, including text from books, articles, websites, and other sources and data types. | No | Yes | ||||||||||||||
| Department Of Homeland Security | MGMT | DHS-419 | AdaptiveMFA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Enhances security and the user experience, including: Behavioral Anomaly Detection - AI monitors user activity over time to establish a baseline pattern of their behavior. If a login attempt deviates significantly, such as logging in from a foreign country, unknown workstation, or disallowed IP address space, Okta can trigger Adaptive Multi-factor Authentication (aMFA) to block access. Adaptive MFA - Okta's AI-powered Adaptive MFA tailors security challenges based on risk level. Trusted users might only need a password, while high-risk users are prompted for biometric or application-based verification. Real-Time Threat Detection - Okta's integration with AI-driven threat intelligence platforms like CrowdStrike and Microsoft Defender enhances real-time visibility into threats, correlating data from endpoint, network, and identity layers. Access Governance with Intelligence - AI enables smarter access reviews and role recommendations. It detects unusual access rights, flags overprovisioned users, and automatically suggests changes. AI is integrated into DHS's identity and access management solutions to strengthen security and enhance the user experience. | Adaptive Multi-factor Authentication (aMFA) introduces additional intelligence into Identity flows by taking into account the authentication context data during the authentication. Using the data, DHS is able to adapt security and authentication policies to enhance the security of DHS systems. | The input includes: device, network, location, travel, IP, and external data from third parties and endpoint security integrations. 
The outputs are risk ratings (HIGH, MEDIUM, LOW) for each authentication attempt, which can be configured to require stricter access control policies. | 30/03/2025 | c) Developed with both contracting and in-house resources | Okta | Yes | The input includes: device, network, location, travel, IP, and external data from third parties and endpoint security integrations. The outputs are risk ratings (HIGH, MEDIUM, LOW) for each authentication attempt, which can be configured to require stricter access control policies. | The system collects signal data during authentication to dynamically build behaviors of the user. | No | No | ||||||||||||||
| Department Of Homeland Security | MGMT | DHS-45 | Text Analytics for Survey Responses (TASR) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Quickly and accurately pulling significant topics and themes from unstructured text responses to DHS internal surveys. | The intended purpose of the AI is to perform topic modeling, sentiment analysis, or other text classification tasks on responses provided to internal staff DHS Pulse Survey questions. Text Analytics for Survey Responses (TASR) is an application for performing Natural Language Processing (NLP) and text analytics on survey responses. It is currently being applied by DHS Office of the Chief Human Capital Officer (OCHCO) to analyze and extract significant topics/themes from unstructured text responses to open-ended questions in the quarterly DHS Pulse Surveys. Results of extracted topics/themes are provided to DHS Leadership to better inform agency-wide efforts to meet employees’ basic needs and improve job satisfaction. | The system’s outputs include a set of topics inferred or surfaced from the raw text comment data, as well as sentiments or other classifications inferred from the data. | 01/11/2022 | b) Developed in-house | No | The system’s outputs include a set of topics inferred or surfaced from the raw text comment data, as well as sentiments or other classifications inferred from the data. | Pulse survey data. | No | Yes | https://github.com/dhs-gov/tasr_lda | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2432 | Airport Throughput Predictive Model | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | This use case is a predictive model for passenger volume to help with airport staffing. | This project was to create a predictive model for passenger volume using the Security Operations throughput count from checkpoints to help with airport staffing. | Once a month, the data is ingested, the predictive model is trained, and predictions of airport checkpoint throughput are made for the airports. | 01/04/2024 | b) Developed in-house | Yes | Once a month, the data is ingested, the predictive model is trained, and predictions of airport checkpoint throughput are made for the airports. | Secure Flight Passenger Data: passenger and airline reservation information received from airlines; PMIS Data: Secure checkpoint throughput counts by airport and checkpoint. | No | Yes | |||||||||||||||
| Department Of Homeland Security | TSA | DHS-2518 | Geographic Current Events Real-Time Alerting Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | TSA is able to analyze and process large volumes of data to evaluate critical events faster and more effectively. | To provide real-time, actionable information by leveraging advanced artificial intelligence (AI) and machine learning algorithms to aggregate and summarize large amounts of publicly available data from social media, news, and other sources. The benefit of this product is that it produces real-time alerts based on a geographic location, including predictive insights and detection of early signals of significant events, trends, or crises. | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | 01/01/2025 | a) Purchased from a vendor | First Alert/Dataminr | No | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | Open-source data from the web, and social media, in addition to geographical location data. | No | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2522 | idiCORE Subscriptions | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This allows agents and analysts to quickly determine interrelationships and connections, facilitating faster and more accurate investigations. | It significantly enhances efficiency by saving analysts time in identifying relationships and associations. It eliminates much of the guesswork, enabling analysts to focus on critical decisions and actions. | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | 17/07/2025 | a) Purchased from a vendor | idiCORE | No | Analysts set up geographic boundaries for reporting alerts and make selections of topical areas of interest, and the tool delivers information that meets the conditions set by the analyst. | idiCORE uses public records data, including government data, property records, business filings, and public social media information, in addition to proprietary/licensed data, such as specialized databases for insurance claims, law enforcement intel, and consumer data, to train its models and perform its data-linking functions. | No | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2604 | AskTSA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Generative AI | The AI is intended to address several challenges within the AskTSA customer service process. These include reducing the time it takes for agents to respond to inquiries, improving the accuracy and consistency of responses, and streamlining the categorization of incoming inquiries. Additionally, the AI would help identify areas for improvement in the virtual assistant’s performance and provide actionable insights to enhance its effectiveness. By automating repetitive tasks like categorizing inquiries, the AI would allow human agents to focus on more complex issues, ultimately improving efficiency and customer satisfaction. | The AI’s intended purpose is to enhance the efficiency, accuracy, and overall effectiveness of the AskTSA customer service process. It would assist human agents by automating routine tasks such as categorizing inquiries and summarizing customer concerns, enabling faster and more consistent communication with the public. Additionally, the AI would analyze interactions with the virtual assistant to identify areas for improvement and recommend adjustments to ensure it provides accurate and helpful responses. By streamlining workflows and providing actionable insights, the AI would support TSA’s goal of delivering high-quality, timely, and reliable customer service. | Summarized Inquiries: Condensed explanations of why a customer is reaching out; Recommended Responses: Suggested replies tailored to the summarized inquiries; Categorized Inquiries: Labels or classifications of inquiries based on their content to streamline workflow; Performance Reports: Analytical insights on the virtual assistant’s interactions, highlighting areas for improvement. | 16/12/2024 | a) Purchased from a vendor | Sprinklr | No | Summarized Inquiries: Condensed explanations of why a customer is reaching out; Recommended Responses: Suggested replies tailored to the summarized inquiries; Categorized Inquiries: Labels or classifications of inquiries based on their content to streamline workflow; Performance Reports: Analytical insights on the virtual assistant’s interactions, highlighting areas for improvement. | Supervised Learning: Decision trees are trained on labeled data (input and desired output) to learn patterns and make predictions on new, unseen data. | Yes | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2605 | Lexis Nexis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Generative AI | Identity verification for passenger screening to bolster transportation security. | Lexis Nexis assists TSA Analysts in identity verification for passenger screening and bolsters transportation security. | The AI outputs are in the form of a person, vehicle, or dwelling report that is customized based on analysts' inputs and selections. These reports are used by TSA to assist in the verification of traveler identities, detecting fraudulent documents, and ensuring compliance with security protocols. | 23/09/2024 | a) Purchased from a vendor | REX DBA Lexis Nexis | No | The AI outputs are in the form of a person, vehicle, or dwelling report that is customized based on analysts' inputs and selections. These reports are used by TSA to assist in the verification of traveler identities, detecting fraudulent documents, and ensuring compliance with security protocols. | AI models are trained using a combination of open web content and proprietary LexisNexis content to ensure high-quality, relevant outputs. Evaluations occur through regular internal and external audits, customer feedback and reviews, and incident response. | No | No | ||||||||||||||
| Department Of Homeland Security | TSA | DHS-2609 | SITE Group Subscription Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | This tool assists TSA with identifying known and unknown threats to the U.S. and international aviation and surface transportation systems. | AI integration within SITE Group enhances its capabilities in monitoring and analyzing extremist activities online. | The AI outputs are in the form of curated reports based on human-verified information within the topics of jihadist threat, domestic violent extremism, critical infrastructure and technology, and terrorism. AI outputs within SITE provide actionable insights. Based on these outputs, follow-on actions might include flagging potential security threats for further analysis. | 20/09/2025 | a) Purchased from a vendor | SITE | No | The AI outputs are in the form of curated reports based on human-verified information within the topics of jihadist threat, domestic violent extremism, critical infrastructure and technology, and terrorism. AI outputs within SITE provide actionable insights. Based on these outputs, follow-on actions might include flagging potential security threats for further analysis. | SITE is trained on a large, human-verified dataset of archived Publicly Available Information (PAI) collected from internet-based platforms. This data includes messenger applications, social media venues, and websites. All data contained within has been human-verified by SITE expert analysts, to avoid incomplete or inaccurate results from data collection methods like web scraping. No agency-owned data is used to conduct training or evaluation of this product. | No | No | ||||||||||||||
| Department Of Homeland Security | USCG | DHS-178 | Adaptive Risk Model for Inspected Small Passenger Vessels | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | A lack of a comprehensive data-driven tool for informing marine inspection policy has left policymakers to make decisions based on qualitative and anecdotal information, resulting in less-than-optimal allocation of limited marine inspection resources. | The Small Passenger Vessels Safety Task Force uses machine learning and expert input to build a flexible analysis tool that identifies the main causes of marine casualties and calculates a risk score for each vessel in the largest segment of the U.S.-inspected fleet. By using a logistic regression–based model with basic machine learning, this effort improves how inspectors are allocated, sharpens the focus on higher-risk vessels, and strengthens oversight to improve passenger safety. | Numerical score that compares vessels' predicted safety risk relative to each other. | 01/01/2021 | b) Developed in-house | No | Numerical score that compares vessels' predicted safety risk relative to each other. | Commercial vessel profiles including: engineering, life saving, propulsion, fire protection, manning, operating routes, plan review, and USCG inspection activity details. | No | Yes | |||||||||||||||
| Department Of Homeland Security | USCIS | DHS-16 | ELIS Evidence Classifier Service | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Adjudicators and contractors spend too much time sifting through digital evidence documents for relevant information. | To enable end users to navigate directly to the page(s) containing evidence documents of interest instead of sifting through large PDF documents. Evidence tagging intends to accelerate case processing by identifying specific types of documents (e.g., I-589, passport photo spread, marriage certificate) and applying a metadata tag to that document object in ELIS. This way, when a user opens a case with potentially hundreds of pages of evidence documents, rather than scrolling through them one at a time to find a specific document of interest, they have clickable "bookmarks" in the UI generated from these tags that will jump directly to the corresponding page. | Tagged evidence. The system inputs an image (scanned document from Lockbox) and outputs either a specific label, such as "Border Crossing Card - Front," or no label if that document is not recognized as one of the classes. | 01/09/2020 | c) Developed with both contracting and in-house resources | SAIC and DV United | Yes | Tagged evidence. The system inputs an image (scanned document from Lockbox) and outputs either a specific label, such as "Border Crossing Card - Front," or no label if that document is not recognized as one of the classes. | The system consists of a single vision-based object recognition model, and many text-based binary classifiers. The text models were trained and evaluated on separate class-specific sets of production data sampled from evidence documents, and each data point is the linearized OCR text obtained from a single scanned page image and AWS Textract. These training and testing sets are then annotated by data scientists on our team. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | ||||||||||||
| Department Of Homeland Security | USCIS | DHS-2385 | Intelligent Document Processing (IDP) for I-539 Form Digitization | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Computer Vision | Before the use case, all pages of an I-539 application were scanned and stored as a single document in the content management system, delaying adjudication and not meeting National Archives and Records Administration (NARA) standards. The tool uses a learning model to identify, classify, and separate individual documents into their component parts for storage. | IDP for the I-539 makes use of an AI-enhanced tool to identify, categorize, and create separate images for each document type submitted as part of the 539 benefit application. Prior to implementation of this use case, all pages of a 539 application were scanned and stored as a single document in the content management system. The benefit is reduced case processing time for adjudicators by identifying and classifying supporting documents for ease of use. An additional benefit is to bring digital images into compliance with NARA standards. | Input - one digital file comprised of all pages of a 539 benefit application. Output - multiple digital files comprised of the individual documents submitted as part of the 539 benefit application. These will include the 539 form, any other USCIS forms, and image files of other supporting documents such as Passports, Driver's license, Marriage Certificate, Bank Statement, etc. All pages of the original digital file are accounted for and stored. Any pages not identified by the tool are referred to a human for document type resolution. | 04/11/2024 | c) Developed with both contracting and in-house resources | CGI Federal under the Records Management Support Services (RMSS); Hyperscience - IDP software OEM | Yes | Input - one digital file comprised of all pages of a 539 benefit application. Output - multiple digital files comprised of the individual documents submitted as part of the 539 benefit application. These will include the 539 form, any other USCIS forms, and image files of other supporting documents such as Passports, Driver's license, Marriage Certificate, Bank Statement, etc. All pages of the original digital file are accounted for and stored. Any pages not identified by the tool are referred to a human for document type resolution. | CMS/STACKS test data is used to train the model. This data is comprised of digital images of blank USCIS forms and common supporting forms (Marriage License, Driver's License, Passport, etc.) generated using fake information such as Mickey Mouse and Donald Duck in place of PII. | Yes | https://www.dhs.gov/publication/dhsuscispia-079-content-management-services-cms | No | https://www.dhs.gov/publication/dhsuscispia-079-content-management-services-cms | ||||||||||||
| Department Of Homeland Security | USCIS | DHS-2598 | PDF Intake (PDFI) for myUSCIS | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Government Benefits Processing | Deployed | c) Not high-impact | Not high-impact | Generative AI | Scanned PDFs submitted through MyUSCIS must be validated against form-specific business rules related to both the overall document and the contents of specific fields. A service was constructed that can process a scanned input document and return all information pertinent to these validation rules in a consistent structure (JSON) to a user-facing ELIS microservice. The GenAI-powered library utilizes the Amazon Bedrock – Anthropic Claude 3.7 Sonnet V1 Foundation Model to extract data from PDF forms. The service provides the ability to submit forms online through the MyUSCIS UI to Lockbox instead of via mail. | Develop a service that can extract relevant fields from a scanned PDF submitted through MyUSCIS and build a JSON as an output to the ELIS microservice. The new service will utilize an AWS Bedrock-provided foundation model. It is an engineering solution that minimizes development time to add new forms or form revisions with high accuracy. | The output of this AI system is structured information about the validation rules applied to the input form as well as the extracted contents of filled fields on the form, presented in a JSON format readable by both humans and machines that is consistent with existing ELIS databases. | 23/07/2025 | c) Developed with both contracting and in-house resources | Analytica and DV United | Yes | The output of this AI system is structured information about the validation rules applied to the input form as well as the extracted contents of filled fields on the form, presented in a JSON format readable by both humans and machines that is consistent with existing ELIS databases. | During development, this system is evaluated using both manually created synthetic data (i.e., filled PDF forms with annotated contents) and production data (scans of forms submitted previously through Lockbox as scanned TIF files). The underlying pretrained foundation model supplied by the Bedrock service is used as-is with no further training or fine-tuning. | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | Yes | https://www.dhs.gov/publication/dhsuscispia-056-uscis-electronic-immigration-system-uscis-elis | ||||||||||||
| Department Of Homeland Security | USCIS | DHS-366 | AI Interview Simulator for Officer Training | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Generative AI | By simulating realistic applicant responses to officer questions, the AI Interview Simulator enables the RAIO trainees to refine their interview techniques without requiring any additional resources from experienced officer trainers or peers. | The AI Interview Simulator mimics live interviews to provide an analysis of the type of responses elicited in a mock interview. This training platform accelerates trainees' interviewing competency by providing a format to practice soliciting testimony from applicants through a chat-based user interface. | The AI Interview Simulator generates human-like conversation in a text format, specially trained and tuned for RAIO Officer training. | 08/09/2025 | c) Developed with both contracting and in-house resources | Steampunk, Customer Value Partners LLC (CVP) and Alpha Omega Integration (AOI) | Yes | The AI Interview Simulator generates human-like conversation in a text format, specially trained and tuned for RAIO Officer training. | AI Interview Simulator uses Proprietary/Private but Not Sensitive data, including training materials, internal guidance documents, and policies that are proprietary to USCIS. | No | https://www.dhs.gov/publication/dhsuscispia-027b-refugees-asylum-and-parole-system-and-asylum-pre-screening-system | Yes | https://www.dhs.gov/publication/dhsuscispia-027b-refugees-asylum-and-parole-system-and-asylum-pre-screening-system | ||||||||||||
| Department Of Homeland Security | USSS | DHS-2626 | License Plate Reader | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Computer Vision | To identify license plate information quickly, efficiently, and accurately from low-quality imagery, advanced image processing and machine learning techniques are typically employed. The tool needs to be able to enhance the clarity of the image, correct distortions, and extract relevant details, even in challenging conditions such as low resolution, poor lighting, motion blur, or obstructions. | The purpose of identifying license plate information from low-quality imagery is to enable accurate and efficient vehicle identification for applications such as law enforcement, traffic monitoring, parking management, border security, and access control, ensuring operational efficiency and enhanced security. | Optical Character Recognition (OCR) technology is utilized to detect and interpret the alphanumeric characters on the license plate. The extracted license plate information serves as a decision support tool, aiding investigators by highlighting possible characters and combinations. This allows investigators to efficiently generate and verify leads while complementing their independent analysis. | 07/01/2025 | a) Purchased from a vendor | Amped | No | Optical Character Recognition (OCR) technology is utilized to detect and interpret the alphanumeric characters on the license plate. The extracted license plate information serves as a decision support tool, aiding investigators by highlighting possible characters and combinations. This allows investigators to efficiently generate and verify leads while complementing their independent analysis. | The vendor trained a dedicated neural network with millions of synthetically generated and distorted license plates for several countries/states. No license plate images were scraped from the web. Experimental validation was performed by the vendor on Italian license plates. Neural Network for Denoising and Reading Degraded License Plates. | No | Yes | ||||||||||||||
| Department Of Homeland Security | ICE | DHS-172 | Video Analysis Tool (VAT) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-197 | Mobile Language Translation Services | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-2409 | ICE Mobile Check-in Application | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-414 | I-765 - USCIS Face Capture Mobile App | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-123 | Voice Analytics for Investigative Data | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | b) Presumed high-impact but determined not high-impact | Not high-impact | The definition states that the "output serves as a principal basis for a decision or action concerning a specific individual or entity..." However, the output of the AI is never the only basis for a decision, and no rights-impacting action is taken unless a human is in the loop making a further determination. | ||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-49 | Mobile Device Analytics for Investigative Data | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | b) Presumed high-impact but determined not high-impact | Not high-impact | The definition states that the "output serves as a principal basis for a decision or action concerning a specific individual or entity..." However, the output of the AI is never the only basis for a decision, and no rights-impacting action is taken unless a human is in the loop making a further determination. | ||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-57 | Identity Match Option (IMO) Tool for Record Compilation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | b) Presumed high-impact but determined not high-impact | Not high-impact | The use case compiles records from across a variety of USCIS systems to provide a comprehensive history of a person’s interaction with USCIS. The AI output can be visualized through a report or dashboard to assist with case review, ensuring access to useful and accurate records (see DHS CAIO Super Memo FY24). The AI outputs do not serve as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s civil rights, civil liberties or privacy, equal opportunities, or access to or the ability to apply for critical government resources or services. Adjudicators review the outputs of this use case, alongside other information and insights, to process a case and make a final determination. The adjudication process can be conducted without this tool; however, doing so would significantly increase the time and effort required to process immigration requests. | ||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-165 | Automated Data Annotation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2364 | Anomaly Detection Homogenous Cargo | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2367 | Computer Vision for Aerial Detection of Land and Open Water Items of Interest | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2369 | AI for Software Delivery | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2371 | Optical Counter - UAS Detection | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2377 | Underwater ROV | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-2378 | Wellness and Physical Fitness Application | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CBP | DHS-P2 | AI for Autonomous Situational Awareness | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-2403 | Security Operation Center (SOC) Network Anomaly Detection | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CISA | DHS-5 | Confidence Scoring for Cybersecurity Threat Indicators | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | CWMD | DHS-406 | Report Analysis and Archive System (RAAS) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | DHS | DHS-368 | Commercial Generative AI for Text Generation (AI Chatbot) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | DHS | DHS-369 | Commercial Generative AI for Image Generation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | DHS | DHS-373 | Commercial Generative AI for Code Generation | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2440 | Recovery and Resilience Resource (RRR) Portal | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-2442 | Digital Processing Procedure Manual (D-PPM) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-248 | Incident Management Workforce Deployment Model (depmod) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-251 | Individual Assistance (IA) & Public Assistance (PA) Projections | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-254 | Planning Assistant for Resilient Communities (PARC) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | FEMA | DHS-346 | Geospatial Damage Assessments | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-53 | Identification Card and Travel Document Code Detection | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | ICE | DHS-9 | Machine Translation (Previously Language Translator) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | OHS | DHS-2420 | MiX MedINT (Medical Intelligence Dashboard and Canvas) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | TSA | DHS-2395 | Conversation Training and Feedback Simulator | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-17 | Case Processing Improvements in FDNS-DS NexGen | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Homeland Security | USCIS | DHS-2544 | OCR for Scanning and Cataloging Documentation [Captiva Open Text] | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0001 | Adobe Suite Applications | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Adobe Suite products is to support the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) in enhancing the quality and efficiency of multimedia content handling within their operations. The out-of-the-box artificial intelligence features included in these Adobe applications assist in tasks like image enhancement, optical character recognition (OCR), content-aware editing, and document formatting. These AI capabilities enable ATF to prepare high-quality documents and multimedia files for various purposes. By automating and improving these tasks, the AI tools help reduce manual workload, minimize errors, and increase operational efficiency. Overall, the use of AI in Adobe Suite products is designed to improve the effectiveness, accuracy, and professionalism of ATF's documentation and multimedia processing. | The expected benefits of utilizing the AI features in Adobe Suite Products are broad and extend beyond FOIA-related activities. The AI capabilities in applications like Photoshop and Acrobat enhance productivity by automating routine tasks such as image editing, optical character recognition (OCR), and content-aware fill. This automation allows staff to process documents and multimedia content more quickly and accurately, leading to time savings and reduced operational costs. The improved efficiency helps in various functions, from preparing official documents to creating high-quality visual materials for communication purposes. Additionally, the AI tools contribute to better quality outputs, which can enhance public engagement and trust. Overall, the AI features in Adobe Suite Products support increased productivity, cost savings, and higher-quality work across a range of organizational activities. | The AI features within Adobe Suite products output automated enhancements and suggestions to improve the efficiency and quality of multimedia content handling. In applications like Photoshop and Acrobat, the AI provides functionalities such as image enhancement, optical character recognition (OCR), content-aware editing, and automated formatting. These outputs assist users by automating routine tasks and enhancing the quality of the final product. The AI serves as a tool to support staff in their work, but all actions are initiated and finalized by human users, ensuring that control remains with the individual operator. | a) Purchased from a vendor | Adobe | No | The AI features within Adobe Suite products output automated enhancements and suggestions to improve the efficiency and quality of multimedia content handling. In applications like Photoshop and Acrobat, the AI provides functionalities such as image enhancement, optical character recognition (OCR), content-aware editing, and automated formatting. These outputs assist users by automating routine tasks and enhancing the quality of the final product. The AI serves as a tool to support staff in their work, but all actions are initiated and finalized by human users, ensuring that control remains with the individual operator. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0002 | Airlines Travel Intelligence Program | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in the Airlines Reporting Corporation Travel Intelligence Program is to examine travel data and highlight atypical routes or passenger movements more swiftly. Leveraging AI for pattern recognition and predictive insights reduces manual data inspection, speeds the identification of significant travel anomalies, and enables more prompt, well-informed decision-making. Overall, it refines resource deployment and fosters more accurate, efficient management of travel-related intelligence. | The expected benefit is reduced effort identifying unusual travel scenarios, focusing on significant itineraries/individuals. | The AI features generate alerts, highlight uncommon travel paths/profiles, suggesting beneficial attention areas. | a) Purchased from a vendor | Airlines Reporting Corporation | No | The AI features generate alerts, highlight uncommon travel paths/profiles, suggesting beneficial attention areas. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0003 | Unmanned Aerial Systems (UAS) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Airship is an enterprise video management platform used to manage, secure, store, and analyze video surveillance obtained through criminal investigations. The Airship platform includes AI-based capabilities for optical character recognition (OCR) for car license plates, airplane tail numbers, etc., and object detection with customizable near-real-time alerts. | AI features help to support agent monitoring of surveillance video streams to ensure rapid notifications when predefined events occur that are pertinent to a criminal investigation. | Bounding boxes for video frames and metadata regarding the detected object characteristics, alphanumeric digitized text from OCR | a) Purchased from a vendor | Airship AI Holdings Inc. | Yes | Bounding boxes for video frames and metadata regarding the detected object characteristics, alphanumeric digitized text from OCR | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0004 | Alation Data Catalog | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The AI within Alation’s Data Catalog is designed to make data easy to find, understand, and use. It leverages machine learning and natural language processing (NLP) to automatically organize and tag data, creating a “map” of available information from various sources within an organization. The goal is that anyone looking for specific data can use simple, plain-language search terms, and the AI will help them locate the most relevant information quickly. | Enable users to quickly find relevant data across large, complex datasets, making information more accessible for decision-making. Additionally, Alation’s AI automates data organization and governance, helping to keep data accurate, up-to-date, and secure. It also supports compliance and builds trust in data, empowering teams to make reliable, data-driven decisions. | The Alation Data Catalog AI system primarily produces outputs that are recommendations and predictive insights to enhance data discovery, governance, and usability. AI outputs guide users toward more efficient and informed data management. | a) Purchased from a vendor | Leidos | Yes | The Alation Data Catalog AI system primarily produces outputs that are recommendations and predictive insights to enhance data discovery, governance, and usability. AI outputs guide users toward more efficient and informed data management. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0005 | Axon FUSUS (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This capability is not provided or managed by ATF. It provides ATF staff access to Axon FUSUS systems which are provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. FUSUS integrates video and other data from public safety systems to support law enforcement investigations and increase situational awareness for real-time operations. This includes AI-enabled analysis of participating video streams, with configurable real-time notifications to law enforcement. | Increased situational awareness for real-time law enforcement operations. | Notifications to law enforcement based upon preconfigured alerts from AI-enabled analysis of data streams from participating security devices | a) Purchased from a vendor | Axon | No | Notifications to law enforcement based upon preconfigured alerts from AI-enabled analysis of data streams from participating security devices | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0006 | Azure Data Factory | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | ATF recently enabled an Azure Data Factory subscription in Azure, which includes smart and intelligent features such as anomaly detection, data quality analysis, data completeness, and prediction. While we do not use these features directly, they are embedded within the tools. | The embedded data analytics and data quality capabilities will increase the efficiency and effectiveness with which ATF is able to locate and analyze our data, and the quality and reliability of the resulting data. | Data extraction transformation/load pipelines used to integrate data from disparate datasets | c) Developed with both contracting and in-house resources | Microsoft | Yes | Data extraction transformation/load pipelines used to integrate data from disparate datasets | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0007 | Azure Zen 2 Storage | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | ATF recently enabled a Storage subscription in Azure, which includes smart features such as automatic data indexing. While we do not use these features directly, they are embedded within the tools, enabling automatic data indexing based on the data fed into the system. | Automatic data indexing increases the efficiency and effectiveness with which ATF is able to locate and analyze our data. | A data index which aids in the retrieval and analysis of ATF data. | c) Developed with both contracting and in-house resources | Microsoft | Yes | A data index which aids in the retrieval and analysis of ATF data. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0008 | Bloomberg Government | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Bloomberg Government (BGOV) is to enhance policy and regulatory analysis by automating data aggregation, pattern detection, and trend forecasting. Using AI, the system refines vast datasets into actionable insights, helping users quickly understand emerging issues and legislative shifts. | More informed strategic planning, better resource allocation, and improved overall comprehension of complex policy environments. | The AI features compile briefs, detect regulatory patterns, and connect data points into a coherent narrative. | a) Purchased from a vendor | Bloomberg Industry Group | No | The AI features compile briefs, detect regulatory patterns, and connect data points into a coherent narrative. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0009 | CargoNet | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in CargoNet is to detect and interpret patterns within logistics and theft incident data, automating the process of identifying unusual activities or recurring risks. | The expected benefit is focusing efforts on deviating areas/goods/patterns to improve effectiveness of subsequent steps. | The AI features produce alerts, reveal clusters, and connect events to show underlying trends. | a) Purchased from a vendor | Verisk Analytics | No | The AI features produce alerts, reveal clusters, and connect events to show underlying trends. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0010 | Cell Hawk (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This capability is not provided or managed by ATF. It provides ATF staff read-only access to Cell Hawk data which is provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. | Link and trend analysis results provided by this system provide law enforcement personnel with increased insights into subjects' activities in the context of criminal investigations. | Link and trend analysis diagrams showing entities involved in criminal investigations and known cellphone-based communications between them | a) Purchased from a vendor | Leads Online | No | Link and trend analysis diagrams showing entities involved in criminal investigations and known cellphone-based communications between them | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0011 | Coinbase | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0012 | Digital.ai | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Digital.ai is used as ATF's enterprise tool for managing agile software development projects. It is used to capture user stories (requirements) and manage the processes of prioritization, implementation, testing, and close-out of the user stories. None of the AI features available within digital.ai are currently in use, but the use case is being reported because they are available within the product. AI features involve analysis of project status, supporting automated test generation, and automating software releases. ATF uses other non-AI products for these purposes. | ATF is not using any of the available AI features. | ATF has not evaluated the AI features in detail since other non-AI products are currently being used to serve the purposes for which digital.ai uses AI. | ATF has not evaluated the AI features in detail since other non-AI products are currently being used to serve the purposes for which digital.ai uses AI. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0014 | Dun & Bradstreet Business Establishments Data | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Dun & Bradstreet Business Establishments Data is to analyze business information, detect irregularities, and assess potential risks more efficiently. AI-driven data integration, anomaly detection, and trend analysis automate what would otherwise be intensive manual evaluations. | The expected benefit is more targeted attention on atypical organizations, improving efficiency and reducing randomness. | The AI features generate risk indicators, show relationships, help focus examination resources. | a) Purchased from a vendor | Dun and Bradstreet Holdings Inc. | No | The AI features generate risk indicators, show relationships, help focus examination resources. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0015 | Axon Evidence.com | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | ATF's use of Axon body-worn cameras and the associated evidence.com service are integral to ensuring ATF's compliance with DOJ policy documented in DOJ OIG, Inspector General Manual, Volume III, Chapter 236, Body Worn Camera Program. Body cam videos are transferred to the Axon Evidence.com service, which includes AI-based features for performing recognition of heads in videos for the purpose of redaction. ATF has performed limited testing of these capabilities, but is not operationally using them. | Evidence.com provides AI-based features that identify the presence of heads in videos and can perform automated redaction. This would increase the efficiency of redacting video evidence for release. However, ATF is not operationally using them. | Evidence.com AI capabilities will output bounding boxes around heads that are identified in videos, for input to redaction processes. These AI functions are not being operationally used. | a) Purchased from a vendor | Axon | Yes | Evidence.com AI capabilities will output bounding boxes around heads that are identified in videos, for input to redaction processes. These AI functions are not being operationally used. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0016 | Federal Docket Management System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | This system allows agencies to receive comments that are submitted electronically in response to rulemaking initiatives. The system has features that categorize comments based on text analytics, allows agencies to make those comments public facing or redact them, and allows agencies to download the comments. | This product permits agencies to review the electronic comments received in response to rulemaking. The features in FDMS allow agencies to bulk post up to 1,000 comments, allowing the public to see their comments faster. | It is the place where agencies receive comments that are electronically submitted in response to a rulemaking. It allows agencies to make those comments public facing. | a) Purchased from a vendor | General Services Administration (GSA) | No | It is the place where agencies receive comments that are electronically submitted in response to a rulemaking. It allows agencies to make those comments public facing. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0017 | FINDER | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in FINDER is to sift through violent crime and firearms trafficking data more efficiently. By employing machine learning to highlight significant patterns, suspects, or trends, it automates tasks that would normally require manual, resource-intensive reviews. This leads to quicker identification of essential insights, improves operational effectiveness, and ensures that effort is directed at the most pertinent leads, enhancing both speed and precision in investigative processes. | The expected benefit is more strategic attention to areas that may reduce negative outcomes or enhance preparedness. | The AI features map trends, highlight recurrent factors, and propose focus points, reducing manual data reviews. | a) Purchased from a vendor | FINDER | No | The AI features map trends, highlight recurrent factors, and propose focus points, reducing manual data reviews. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0018 | First Two | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in First Two is to link individuals to specific locations and visualize spatial relationships that might be important. By automating map-based data correlation, pattern discovery, and geospatial analysis, the system reduces manual data plotting and interpretation. This results in quicker recognition of significant activity areas, improved allocation of resources to critical locations, and overall enhancement of operational responsiveness and strategic planning. | The expected benefit is more efficient focus on frequently visited places/people, enhancing resource allocation where location matters. | The AI features visualize activity geographically, spotlight frequent places, and suggest beneficial attention areas. | a) Purchased from a vendor | FirstTwo | No | The AI features visualize activity geographically, spotlight frequent places, and suggest beneficial attention areas. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0019 | LexisNexis Accurint | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in LexisNexis Accurint is to merge and analyze an array of public and proprietary records to build a clearer picture of individuals and entities. By automating data integration, filtering, and pattern recognition, it expedites the identification of relevant subjects and connections. This advanced approach reduces manual searching, enhances accuracy in linking data points, and strategically directs focus toward high-value leads, improving overall investigative efficiency and decision-making. | The expected benefit is accelerated ID of important figures/relationships, skipping disjointed dataset searches. | The AI features compile overviews, note aliases, highlight patterns, giving clear reference points for exploration. | a) Purchased from a vendor | LexisNexis | No | The AI features compile overviews, note aliases, highlight patterns, giving clear reference points for exploration. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0020 | LexisNexis Babel Street | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0021 | Mark43 Public Safety Records | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Mark43 Public Safety Records is to organize, categorize, and correlate extensive law enforcement data from various sources. By employing AI-driven classification, entity resolution, and data linking, it reduces the manual workload needed to extract meaningful insights. This empowers users to identify key relationships and patterns more rapidly, increases data accuracy, and supports more strategic use of investigative resources, ultimately improving both timeliness and quality of public safety operations. | The expected benefit is better-informed decision-making, allowing focus on data likely to yield meaningful insights. | The AI features categorize documents, flag repeated factors, and highlight patterns hidden without assistance. | a) Purchased from a vendor | Mark43 | No | The AI features categorize documents, flag repeated factors, and highlight patterns hidden without assistance. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0022 | National Insurance Crime Bureau ISO ClaimSearch | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in National Insurance Crime Bureau ISO ClaimSearch is to analyze insurance claim data and highlight suspicious patterns or anomalies more efficiently. By applying AI-driven pattern recognition, risk assessment, and anomaly detection, it streamlines what would be a tedious manual review process. This leads to faster fraud identification, more accurate targeting of problematic claims, and better resource utilization, ultimately strengthening investigative outcomes. | The expected benefit is more effective resource use, targeting claims that differ from typical patterns rather than all equally. | The AI features produce lists of flagged claims, show patterns, and suggest where deeper validation might help. | a) Purchased from a vendor | ISO / Verisk Analytics / National Insurance Crime Bureau | No | The AI features produce lists of flagged claims, show patterns, and suggest where deeper validation might help. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0024 | Operational Planning Analytics Risk Management Solution (OPARMS) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Computer Vision | The Operational Planning Analytics Risk Management Solution (OPARMS) creates operational plans (op plans) within ATF's case management system, replacing the previous system for generating op plans. ATF anticipates that OPARMS will also provide dashboard overviews of the op plans and risk data for all op plans. | Improve operational planning, data collection, and risk mitigation. Reduce time to fill out op plans. Increase accuracy of data entered into op plans by populating information from other ATF systems. Reduce approval times by routing and tracking op plans through OPARMS. Increase collection of operational planning and after-action reports. Collect and store data to begin developing the analytics for calculating and identifying risk in operations, allowing team members to better mitigate those risks. | (1) Risk rating of proposed operations; (2) Recommendations for resource allocation based on risk ratings | (1) Risk rating of proposed operations; (2) Recommendations for resource allocation based on risk ratings | |||||||||||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0025 | Palantir (via access to external state LE system) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | This use case is being reported because Palantir has AI-based capabilities. However, ATF only accesses Palantir through task force partnerships with a state law enforcement (LE) partner. The state LE agency uses Palantir as its case management system, and ATF's use is limited to searching for information which the state LE agency chooses to share with law enforcement partners. ATF has no involvement with, or any knowledge of, the use of AI features by the state agency which runs the system. | ATF use of the state LE partner's Palantir system is limited to searching for information which the state agency chooses to share with law enforcement partners. ATF has no involvement with, any knowledge of, or any expected benefits from the use of AI by the state agency which runs the system. | Unknown. ATF only uses the system to perform standard search functions on information which the state agency chooses to share with law enforcement partners. | a) Purchased from a vendor | Palantir | No | Unknown. ATF only uses the system to perform standard search functions on information which the state agency chooses to share with law enforcement partners. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0028 | ShotSpotter (via access to external state/local systems) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Rapidly locate gunshots and activity of potential investigative interest to alert relevant law enforcement agencies. | Rapid detection of gunshots, which can help to decrease the time to respond to a violent crime. | Outputs sensor reports of gunshots and activity of investigative interest. | a) Purchased from a vendor | SoundThinking | No | Outputs sensor reports of gunshots and activity of investigative interest. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0029 | Southwest Border Transaction Record Analysis Center (SWBTRAC) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Southwest Border Transaction Record Analysis Center (SWBTRAC) is to monitor cross-border financial transactions and identify unusual activities more efficiently. By integrating AI-driven anomaly detection, trend analysis, and risk scoring, it simplifies manual review and directs attention to truly irregular transfers. This approach ensures that investigative focus is applied judiciously, improves accuracy, and enhances strategic use of resources in addressing cross-border financial concerns. | The expected benefit is faster recognition of outlier scenarios, focusing on meaningful transactions rather than all equally. | The AI features provide alerts, highlight unusual transfers, and explain why certain activities warrant closer observation. | a) Purchased from a vendor | Western Union | No | The AI features provide alerts, highlight unusual transfers, and explain why certain activities warrant closer observation. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0030 | Spokeo | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in Spokeo is to aggregate and enrich publicly accessible personal data to create comprehensive profiles. AI-based entity resolution, data linkage, and pattern recognition automate the task of piecing together scattered details. This significantly reduces manual workload, sharpens the accuracy of identifying individuals of interest, and ensures that investigative energies are invested where they can yield the most valuable insights. | The expected benefit is quicker access to comprehensive background details, easily identifying individuals of interest. | The AI features assemble contact info, historical records, and related data into a cohesive presentation. | a) Purchased from a vendor | Spokeo | No | The AI features assemble contact info, historical records, and related data into a cohesive presentation. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0031 | Thomson Reuters CLEAR | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The purpose of the AI use case in Thomson Reuters Clear is to consolidate and analyze diverse records, creating coherent profiles of individuals and entities. By applying AI-driven data integration, entity resolution, and intelligent filtering, it automates traditionally manual tasks. This leads to faster identification of subjects of interest, improved accuracy in linking related data points, and a more strategic use of time and resources. Ultimately, AI integration enhances investigative effectiveness, reduces errors, and supports more targeted research. | The expected benefit is reduced manual effort, enabling quicker discovery of relevant profiles/relationships. | The AI features organize profiles, indicate links, and suggest deeper review areas, streamlining fragmented processes. | a) Purchased from a vendor | Thomson Reuters | No | The AI features organize profiles, indicate links, and suggest deeper review areas, streamlining fragmented processes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0032 | TransUnion TLOxp Online Investigative Services | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The purpose of the AI use case in TransUnion TLOxp Online Investigative Services is to gather and analyze diverse personal, financial, and asset-related data more effectively. By implementing AI for entity resolution, risk scoring, and pattern detection, it replaces manual data handling with automated insights. This accelerates the discovery of meaningful leads, reduces errors, and ensures attention is concentrated on cases truly warranting further review, improving both accuracy and resource allocation. | The expected benefit is better time/effort use, focusing on subjects/data points that seem more meaningful. | The AI features highlight key personal details, indicate anomalies, and present data in a structured format for deeper inquiry. | a) Purchased from a vendor | TransUnion | No | The AI features highlight key personal details, indicate anomalies, and present data in a structured format for deeper inquiry. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0033 | Veritone Redact | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The purpose of the AI use case in Veritone Redact is to support ATF in processing Freedom of Information Act (FOIA) requests by automating the redaction of sensitive information within audio and video files. | The expected benefits of the AI use case in Veritone Redact involve enhancing the efficiency and accuracy of redacting sensitive information within audio and video files. By automating the identification and redaction of personally identifiable information and other confidential content, the AI reduces the time and effort required from staff to process multimedia materials. This leads to faster turnaround times for releasing information, thereby reducing customer wait times for FOIA requests. The AI-driven redaction process also helps ensure compliance with privacy laws and regulations, minimizing the risk of inadvertently disclosing sensitive information. These efficiencies result in cost savings through reduced labor hours and improved resource allocation. Overall, the AI in Veritone Redact is expected to improve operational efficiency, enhance compliance, and support timely access to information for the public. | The AI system in Veritone Redact outputs recommendations and automated actions to support the redaction of sensitive information within audio and video files. It intelligently identifies personally identifiable information and other confidential content that may need to be redacted under legal exemptions. All suggested redactions are reviewed and approved by human staff to ensure compliance with legal standards. | a) Purchased from a vendor | aiWARE | Yes | The AI system in Veritone Redact outputs recommendations and automated actions to support the redaction of sensitive information within audio and video files. It intelligently identifies personally identifiable information and other confidential content that may need to be redacted under legal exemptions. All suggested redactions are reviewed and approved by human staff to ensure compliance with legal standards. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0034 | Whooster | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The purpose of the AI use case in Whooster is to integrate and analyze data from multiple sources to build comprehensive profiles on persons or entities of interest. AI-driven entity resolution, relationship mapping, and data enrichment help automate what would otherwise be manual, time-consuming cross-referencing. This improves the speed and accuracy of identifying relevant connections, ensures more targeted follow-up, and optimizes the allocation of investigative resources. | The expected benefit is more effective use of time, focusing on particularly relevant subjects. | The AI features compile profiles, highlight relationships, and present context, ensuring key info is easily accessible. | a) Purchased from a vendor | Whooster | No | The AI features compile profiles, highlight relationships, and present context, ensuring key info is easily accessible. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0035 | Commercial LPR (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | This capability is not provided or managed by ATF. It provides ATF staff access to Flock license plate reader systems which are provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. | License plate reader systems are used by law enforcement to assist with identification of vehicles associated with criminal investigations. ATF has no role in training or managing AI capabilities which are incidental to providing the service. | Alphanumeric characters and/or symbols associated with vehicle license plates | a) Purchased from a vendor | Flock Safety | No | Alphanumeric characters and/or symbols associated with vehicle license plates | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0036 | ELSAG/Leonardo (via HIDTA partners) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This capability is not provided or managed by ATF. It provides ATF staff access to ELSAG license plate reader systems which are provided and managed by High Intensity Drug Trafficking Area (HIDTA) partner agencies. | License plate reader systems are used by law enforcement to assist with identification of vehicles associated with criminal investigations. ATF has no role in training or managing AI capabilities which are incidental to providing the service. | Alphanumeric characters and/or symbols associated with vehicle license plates | a) Purchased from a vendor | Leonardo US Cyber and Security Solutions LLC | No | Alphanumeric characters and/or symbols associated with vehicle license plates | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0038 | Thomson Reuters Vigilant Vehicle Manager | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The purpose of the AI use case in Thomson Reuters Vigilant Vehicle Manager is to intelligently organize, correlate, and analyze large volumes of license plate and vehicle sightings data. By automating the identification of recurring patterns and providing data-driven insights, it streamlines the workflow for identifying vehicles that may warrant attention. The AI capabilities reduce time-consuming manual reviews, improve accuracy in spotting significant trends, and help allocate investigative resources more effectively for timely and well-informed actions. | License plate reader systems are used by law enforcement to assist with identification of vehicles associated with criminal investigations. ATF has no role in training or managing AI capabilities which are incidental to providing the service. | The AI features produce overviews of vehicle activity, highlight recurring patterns, and point to areas for further review. | a) Purchased from a vendor | Motorola | No | The AI features produce overviews of vehicle activity, highlight recurring patterns, and point to areas for further review. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0039 | Veritone Illuminate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Veritone Illuminate is a cloud-based application that provides machine translation and audio/video transcription and analysis to aid human review in investigations. The product allows us to leverage artificial intelligence (AI) to systematically turn unstructured data (audio and video files) into structured data. The structured data is easily searchable and provides more value to our cases. | Converts audio and video files to text-searchable formats, and quickly translates multiple native languages into English-based text for more efficient review. | audio/video transcription text; face detection (not used); object/scene detection results in images/videos; text extraction from images; speaker identification in audio; pattern recognition results; entity extraction | a) Purchased from a vendor | Veritone | Yes | audio/video transcription text; face detection (not used); object/scene detection results in images/videos; text extraction from images; speaker identification in audio; pattern recognition results; entity extraction | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0040 | SAS (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporate and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0041 | R (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporate and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0042 | Stata (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporate and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0043 | Matlab (Forecasting, predictive analytics, data/statistical analysis) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Forecasting market conditions using various data sources for economic analyses. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporate and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0044 | Databricks (Forecasting, predictive analytics, data/statistical analysis) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Client-side and SaaS products that contain fundamental modeling, AI, and machine learning techniques for predictive modeling, natural language processing, computer vision, and deep learning. Design and develop sophisticated economic models to analyze markets. | Provides ATR with the ability to analyze, summarize, synthesize, and manage various data sources and data formats to support economic and statistical analysis of corporations and market conditions. ATR relies heavily on data analytical tools to support micro- and macro-level analysis. | Predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | a) Purchased from a vendor | Databricks | Yes | Predictive model outputs; statistical analysis; natural language processing text analytics; forecasting predictions; optimization recommendations; risk analysis scores; anomaly detection; customer segmentation groupings; decision tree analysis paths | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0045 | Salesforce | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | This initiative modernizes our current on-premise and matter management databases and storage systems by migrating to cloud-based platforms specifically architected for AI integration and enhanced system interoperability. Our existing applications and data are currently hosted on on-premise systems that create fundamental barriers to AI implementation, limiting data accessibility, constraining scalability, and preventing the real-time data integration that modern AI tools require. | (1) Creates a replicable model for DOJ-wide adoption by establishing standardized cloud architecture and data integration frameworks that can be scaled across components. Demonstrates how breaking down legacy data silos enables coordinated, AI-driven insights and business intelligence capabilities that support department-wide resource allocation and investigative priorities. (2) Leverages JMD's established AWS Landing Zone to accelerate cloud adoption and builds upon existing data governance frameworks and system modernization efforts across DOJ components. The migration integrates with current cloud-based systems through the AWS Landing Zone infrastructure, enabling AI-powered business intelligence capabilities without duplicating technology investments. (3) Completes the migration to AI-native cloud infrastructure and deploys an AI-driven business intelligence dashboard that enables Division leadership to explore patterns and trends across the litigation portfolio. | Enabling Infrastructure: creates foundational capability for AI applications rather than direct AI outputs; enables other use cases to generate their respective predictions, recommendations, and automated actions. | a) Purchased from a vendor | No | Enabling Infrastructure: creates foundational capability for AI applications rather than direct AI outputs; enables other use cases to generate their respective predictions, recommendations, and automated actions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | ||||||||||||||
| Department Of Justice | Department of Justice / COPS | DOJ-0046 | ChatBot | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The chatbot responds to customer questions, allowing staff to focus on other priorities. | The Intelligent Bot tool is implemented using the Question and Answer (QnA) Maker and Language Understanding (LUIS) services, which are developed using the Microsoft Azure Software as a Service (SaaS) infrastructure in the cloud on the COPS website. This allows COPS to easily implement a new knowledge-base question-and-answer feature on the site, which responds via a chat box to questions entered by customers. | Response | b) Developed in-house | No | Response | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0047 | BMC Helix ITSM | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | BMC Helix's ITSM AI capabilities include Proactive Problem Management and Incident Correlation, which will allow CRM's IT Service Desk to more efficiently identify issues, resolve incidents, automate case routing, and perform root cause analysis. | Increase IT service desk productivity, prevent issues before they occur, and decrease user downtime. | Prediction, recommendation | a) Purchased from a vendor | BMC | Yes | Prediction, recommendation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0048 | Thomson Reuters CLEAR - License Plate Recognition | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | CLEAR LPR enables investigators to identify location history for license plates, connect addresses and individuals of interest to a vehicle's location and obtain images of a vehicle. CRM uses CLEAR LPR to aid in investigations. | Streamline investigations, reducing the need to search multiple platforms, while saving costs and allowing investigators to more quickly identify relevant information. | Object recognition, OCR, prediction | a) Purchased from a vendor | Thomson Reuters | No | Object recognition, OCR, prediction | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0050 | Veritone | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve transcription and translation of written, video, and audio files. | Transcribe audio and video, saving manual review time and associated costs. | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | a) Purchased from a vendor | Veritone | Yes | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0053 | AWS/cloud.gov - Network Routing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Support public complaint submission by enabling network routing technology to optimize routing speed and re-route traffic around any configuration issues. | Expected benefits: Helps sustain reliable network access to CRT networked infrastructure for CRT staff and for public users submitting complaints. | Decision and action relating to network routing and load management. | a) Purchased from a vendor | Amazon | No | Decision and action relating to network routing and load management. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0054 | Azure Platform/Tools - Network Routing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Decision and action relating to network routing and load management. | Expected benefits: Helps sustain reliable networks access to CRT networked infrastructure by CRT staff. | Decision and action relating to network routing and load management. | a) Purchased from a vendor | Microsoft | No | Decision and action relating to network routing and load management. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0055 | Camtasia | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Dynamic captions use AI technology within the local client software to convert speech to text and produce a transcript file. | Captioning is enabled based on user preference within the client app. Producing and retaining a transcript is disabled by CRT IT policy. | Contemporaneous closed captioning; transcript saving is disabled. | Contemporaneous closed captioning; transcript saving is disabled. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0056 | Cloudflare Turnstile | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Prevents bots from automatically submitting spam complaints through the public web portal. | This should reduce the time required for reviewing reports submitted by the public, thus increasing efficiency in the report review process and improving public service. | Decision related to routing of a civil rights violation report submitted by a bot. | a) Purchased from a vendor | Cloudflare | Yes | Decision related to routing of a civil rights violation report submitted by a bot. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0057 | Dragon | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Use natural language processing to convert spoken audio to text for employees with vision or mobility limitations. | Increased accessibility. Employees with a disability are provided a reasonable accommodation. | Text transcription of spoken audio. Navigation of laptop operating system. | a) Purchased from a vendor | Nuance Communications | Yes | Text transcription of spoken audio. Navigation of laptop operating system. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0058 | Evidence.com | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Use the auto-transcribe feature to transcribe body worn camera footage to text making it more easily searchable. Also use the redaction assistant feature to remove sensitive information from videos. | Improved speed of pre-processing workflows, faster identification of relevant audio, efficiency of investigations. | Text transcriptions, video with sensitive images redacted | Text transcriptions, video with sensitive images redacted | |||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0060 | DEA Drug Signature Program Models | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Understanding the manufacturing origin and distribution route of illicit drugs by analyzing the chemical composition of seized drugs is a core DEA task. The purpose of this use case is to use AI/ML techniques to develop, maintain, and improve models that enable designated forensic chemists to identify a notional geographic origin or notional manufacturing route of samples selected for DEA's Drug Signature Programs. | The solution supports designated forensic chemists at DEA to automate analyses and to more quickly identify trend changes regarding drug sample notional geographic origin or notional manufacturing route. | Designated forensic chemists responsible for a particular Signature Program are provided with the model's output (a notional geographic region of origin or a notional manufacturing route of samples), which these forensic chemists then evaluate along with other available information to better understand drug trends. | c) Developed with both contracting and in-house resources | Yes | Designated forensic chemists responsible for a particular Signature Program are provided with the model's output (a notional geographic region of origin or a notional manufacturing route of samples), which these forensic chemists then evaluate along with other available information to better understand drug trends. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0061 | LPR: DEA DEASIL Program | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA needs an efficient and effective way to identify and track the movements of persons of interest based on vehicle license plates. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | Yes | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0062 | LPR: State Partner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | License Plate Readers (LPR) can be one important investigative tool to support understanding of drug markets, manufacturing, and distribution channels. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | No | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0063 | LPR: Federal Partner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA needs an efficient and effective way to identify and track the movements of persons of interest based on vehicle license plates. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | No | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0064 | LPR: Commercial Solution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | License Plate Readers (LPR) can be one important investigative tool to support understanding of drug markets, manufacturing, and distribution channels. | License Plate Readers (LPR) can be one important investigative tool to support cases. | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | a) Purchased from a vendor | No | This License Plate Reader (LPR) technology captures images of vehicle license plates, automatically converts the information they contain to characters, and can add metadata such as the time and location the image was captured. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0065 | Friction Ridge Print Comparisons | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA, when conducting fingerprint analysis to identify individuals who may be connected to evidence, needs to be able to compare friction ridge prints to other prints within the boundaries of a case. The product enables linking of cases where individuals are not necessarily identified. | This use case saves time and provides information for human decision-making. | Outputs images and portions of print cards. | a) Purchased from a vendor | Yes | Outputs images and portions of print cards. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0066 | Automated Count of Items in Photos | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to validate the drug forms, shapes, sizes, and counts contained in drug seizure exhibits so that this information can be effectively used in court proceedings. | To ensure timely, high-confidence counts, this use case allows forensic scientists to accelerate validation and enables quality control checks by serving as an unbiased count against submitted paperwork and manual counts. This will reduce a labor- and resource-intensive counting process. | Outputs recommended counts with image labels. | Outputs recommended counts with image labels. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0067 | Body Worn Camera (BWC) Audio-Video Software AI Tools | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0068 | Supply Chain Analytics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA has a mission to protect communities and save lives. It requires an understanding of global drug markets, manufacturing, and distribution channels as well as the impact of illicit drugs on communities and individuals. Supply chain analytics is a tool to further open and active investigations and protect the American public. | Supply chain information about goods associated with drug manufacturing and trafficking can further investigative leads and allow dedicated DEA personnel to track global trends, determine the impact of market forces, and understand import/export stakeholders. This is particularly critical for DEA's work on precursor chemicals used in the production of synthetic drugs like fentanyl. | Supply chain analytical outputs vary by query and generally focus on specific markets or business entities. | a) Purchased from a vendor | No | Supply chain analytical outputs vary by query and generally focus on specific markets or business entities. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0069 | Data & Analytics: Database Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | DEA needs a way to search, monitor, and analyze a wide variety of public-facing unstructured and structured data sources such as scientific literature, social media, dark web, news, public records, public internet forums, and more in furtherance of investigations. | This use case enables DEA to gain key insights at exponentially higher speeds with lower costs by fusing a variety of data sources. Benefits include: operational insights, emerging threat detection, agent safety, public safety, improved mission-enabling services, etc. | Outputs vary by use case. | a) Purchased from a vendor | No | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0071 | Data & Analytics: Healthcare Fraud Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | DEA needs a way to facilitate analysis of health care fraud and abuse, especially cases impacting government health programs. Federal government healthcare insurance programs include Medicare, Medicaid, TRICARE, VA, and others. The Health Care Fraud and Abuse Control Program (HCFAC) was created to unite DOJ and HHS in their efforts to combat fraud. | Enhances the detection and prevention of health care fraud and abuse crimes within the context of DEA's mission. | Outputs include data visualizations, as well as trend, benchmark, and link analyses. | c) Developed with both contracting and in-house resources | No | Outputs include data visualizations, as well as trend, benchmark, and link analyses. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0072 | Data & Analytics: Threat and Security Incident Monitoring | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA wants to improve its real-time collaboration, executive protection, and agent safety with data-driven situational awareness, and identify potential threats using entity resolution to analyze multiple commercial and government data feeds. | This use case enables DEA to gain key insights at exponentially higher speeds with lower costs by fusing a variety of data sources on potential threats from both insiders and external actors and to assist internal monitoring of multiple devices for real-time collaboration. This use case provides entity resolution, taking information entered by DEA analysts and scanning data sources for likely matches with risk indicators, and returning those results to analysts for review. | Outputs vary by use case. | a) Purchased from a vendor | No | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0073 | Data & Analytics: Transportation OCR | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA needs a reliable, accurate way to identify the cartel or other organization associated with a seized drug exhibit. | AI capabilities can enable DEA to identify cartels or other organizations associated with drug seizures. This could improve DEA's understanding of drug trends. | The technology can "fingerprint" drug seizures of various forms (e.g. packaging, powder, pills). The output is a series of best matches for experienced DEA personnel to evaluate for possible matches to associated organizations or details from previously collected drug seizures. | The technology can "fingerprint" drug seizures of various forms (e.g. packaging, powder, pills). The output is a series of best matches for experienced DEA personnel to evaluate for possible matches to associated organizations or details from previously collected drug seizures. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0081 | Data & Analytics: Chemistry Instrument Library | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA needs a way to identify unknown drug samples, which can be done by comparing them against a library of known spectra for the best match. | For less commonly encountered compounds, this use case saves time and provides information for human decision-making. The comparison/matching also fulfills requirements to provide spectra from known/traceable materials. | Outputs a recommended list of best matches to be decided upon by the analyst and included in the case file as supporting evidence for the identified substances reported. | c) Developed with both contracting and in-house resources | Yes | Outputs a recommended list of best matches to be decided upon by the analyst and included in the case file as supporting evidence for the identified substances reported. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0082 | Data & Analytics: Gunshot Detection System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Rapidly locate gunshots and activity of potential investigative interest to alert relevant law enforcement agencies. | Rapid detection of gunshots, which can help to decrease the time to respond to a violent crime. | Outputs sensor reports of gunshots and activity of investigative interest. | a) Purchased from a vendor | No | Outputs sensor reports of gunshots and activity of investigative interest. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0083 | Financial and Cryptocurrency Analysis: Federal Partner | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Intelligence Analysts and DEA agents require tools that enable them to accelerate triage, discovery, investigations, and reporting while preserving the context, control, and credibility necessary to make their intelligence actionable. These tools must also support the rapid processing of raw, primary source content to provide useful insights and facilitate the identification of patterns and relationships in financial transactions, including cryptocurrencies, to aid in drug trafficking and money laundering investigations. | Saves time and money by freeing up Intelligence Analysts to focus on higher-order analysis, while enhancing the detection and prevention of financial crimes related to ongoing investigations. | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | c) Developed with both contracting and in-house resources | No | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0084 | Financial and Cryptocurrency Analysis: Commercial Solution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Intelligence Analysts and DEA agents require tools that enable them to accelerate triage, discovery, investigations, and reporting while preserving the context, control, and credibility necessary to make their intelligence actionable. These tools must also support the rapid processing of raw, primary source content to provide useful insights and facilitate the identification of patterns and relationships in financial transactions, including cryptocurrencies, to aid in drug trafficking and money laundering investigations. | Saves time and money by freeing up Intelligence Analysts to focus on higher-order analysis, while enhancing the detection and prevention of financial crimes related to ongoing investigations. | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | c) Developed with both contracting and in-house resources | No | Provides summary materials, on-demand briefings from a chatbot, target profiles, and language search results, along with outputs such as recommended search results, link analysis, alerts, and data visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0086 | Controlled Substances Act (CSA): Automation of Reports and Consolidated Orders System (ARCOS) Data Summarization | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to automate the validation, summarization, and outlier identification of ARCOS data for further analysis. | To save the time and effort of fully manual review of ARCOS data. | Outputs a list of recommendations with validated data and summary information on the detected outliers. | c) Developed with both contracting and in-house resources | Yes | Outputs a list of recommendations with validated data and summary information on the detected outliers. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0087 | Controlled Substances Act (CSA): Transaction Data Ranking | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to rank CSA transaction data based on the factors identified/calculated as part of a manual analysis. | To reduce the time and effort of manual ranking. | Outputs a data visualization that highlights activity, based on certain factors, and includes a ranking. | c) Developed with both contracting and in-house resources | Yes | Outputs a data visualization that highlights activity, based on certain factors, and includes a ranking. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0088 | Intelligence Data Platform | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | DEA has a mission to protect communities and save lives. It requires an understanding of communications within criminal operations. Multi-language machine transcription of audio files from lawfully seized devices, authorized correctional facility communications, and other authorized communications, with necessary English translation, will filter massive audio files to relevant data for review. | Expedites investigations as audio files can be easily searched and filtered to quickly identify which parts of the conversations should be reviewed and interpreted officially by human translators and analysts for discovery purposes. | Outputs vary by use case. | a) Purchased from a vendor | PenLink PLX | Yes | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0089 | Call Center Management and Service Delivery Support | a) Pre-deployment – The use case is in a development or acquisition status. | Government Benefits Processing | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA wants to improve the effectiveness of the Diversion Control Registrant Call Center by automating the routing of incoming calls, providing answers to common questions for Diversion Control Registrant call center agents to identify and quickly propose steps toward resolution, and monitoring customer feedback. | To expedite responding to registrant needs with timeliness and consistency of DEA's customer service posture to DEA registrants. To decrease labor costs while maintaining high levels of customer service to DEA registrants. To capture needed improvements to service for implementation. | Outputs a recommended course of action for review by DEA staff along with a prioritization classification and provides customer response metrics. | Outputs a recommended course of action for review by DEA staff along with a prioritization classification and provides customer response metrics. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0090 | Generative AI R&D Sandbox | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | a) High-impact | High-impact | Generative AI | The purpose of this use case is to create an R&D environment to enable DEA to test and prototype Generative AI-based solutions. | To provide a safe environment to prototype and test experimental AI-based use cases. | Outputs vary based on the use cases. | a) Purchased from a vendor | NVIDIA | Yes | Outputs vary based on the use cases. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0091 | Instructional System Design Online Courses | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA has a mission to protect communities and save lives. The purpose of this use case is to generate images for use in training materials. | To reduce the amount of time and resources needed to develop training aids and graphical inserts for online Computer Based Training (CBT). | Outputs graphical content based on the queries entered by users. | Outputs graphical content based on the queries entered by users. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0092 | Autonomous Drone Detection and Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA needs drones that can safely and effectively navigate autonomously. | This use case supports human drone operators and pilots in deploying drones by collecting data. It minimizes the need for costly, extensive training and allows drone operators to focus on real-time operational needs. It supports post-deployment data analysis. | Outputs high-resolution imaging and thermal imaging data, video feeds, 3D mapping, and reports and analytics on drones and their communication links. | a) Purchased from a vendor | No | Outputs high-resolution imaging and thermal imaging data, video feeds, 3D mapping, and reports and analytics on drones and their communication links. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0093 | Nuclear Magnetic Resonance (NMR) Spectra Prediction | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA has a mission to protect communities and save lives. Phase and baseline corrections are important processing steps in the analysis of Nuclear Magnetic Resonance (NMR) spectra. Deep learning achieves excellent results in recognition and segmentation tasks, supporting users with spectra processing and interpretation. | Fast processing and interpretation of nuclear magnetic resonance spectra. | Outputs predicted labels for the spectra. | Outputs predicted labels for the spectra. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0094 | Automated IT Services and Application Monitoring | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0100 | BriefCatch | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Addresses errors found in manual legal writing. | Aids attorneys with grammar and sentence structure to enable a stronger written product. | Briefs, summaries, and any other written work product. | a) Purchased from a vendor | Lawcatch LLC | No | Briefs, summaries, and any other written work product. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0101 | Cybersecurity Defense Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses manual process of detecting unknown cybersecurity threats. | Certain tools are used Department-wide to provide endpoint detection; threat intelligence, analysis, and response; and related services. These tools help the Department more quickly identify and respond to threats and indicators of compromise from systems. | Recommendations related to potential IT security threats. | a) Purchased from a vendor | CrowdStrike, Zscaler, Splunk, Lookout, Palo Alto Networks, and Cisco Secure Network Analytics | Yes | Recommendations related to potential IT security threats. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0102 | eLitigation Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses errors and speed delays found in exclusively manual human review of voluminous electronic information. | Department of Justice components use electronic litigation (“eLitigation”) tools for a broad range of purposes, including support of investigations, litigation, and FOIA and Privacy Act processes. Most of these tools are commercial-off-the-shelf products that are commonly used outside of government, such as Everlaw, FOIAXpress, and Relativity. These tools increasingly integrate AI capabilities that can assist with tasks core to the mission of the Department, such as surfacing potentially discoverable information in voluminous collections of emails, text messages, or other electronic records; locating potentially inculpatory or exculpatory evidence in voluminous electronic data; and identifying material that may be appropriate for disclosure or withholding according to applicable legal rules and privileges. eLitigation tools can offer substantial benefits over exclusively human review of voluminous electronic information: they can be faster, more accurate and consistent, and more efficient. Please note: These tools are used in contexts that are high-impact, but the nature and details of AI uses vary, which may affect whether particular uses are high-impact. | Outputs vary by use case. | a) Purchased from a vendor | CloudNine Law, Everlaw, Relativity, and Nuix. | Yes | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0103 | Digital Forensics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Addresses key problems in processing digital data related to law enforcement including efficiency in processing digital evidence, automating time consuming tasks such as organizing and classifying data, and accuracy in evidence analysis. | Forensic analysis tools used to extract, analyze, search, and organize digital evidence and datasets. Increases the efficiency of extracting data from devices and of analyzing/searching for pertinent data within devices and datasets. | Outputs vary by use case. | a) Purchased from a vendor | These tools include, for example, Cellebrite, Magnet Axiom and Griffeye | Yes | Outputs vary by use case. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0105 | LexisNexis (AI assisted legal research) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses manual process of conducting legal research. | Improves accuracy and efficiency of legal research. | Recommends caselaw and other legal materials (such as statutes, regulations, and scholarly articles) and, in some circumstances, an overview of the law in response to queries. | a) Purchased from a vendor | LexisNexis | Yes | Recommends caselaw and other legal materials (such as statutes, regulations, and scholarly articles) and, in some circumstances, an overview of the law in response to queries. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0106 | Percipio Skillsoft | a) Pre-deployment – The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Helps employees identify relevant training courses. | Interactive tool customizes learning environments for the workforce's individual needs. | Training recommendations. | Training recommendations. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0107 | ServiceNow | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | ServiceNow automates IT helpdesk ticket triage and classification and can handle routine inquiries and simple tasks to free up resources to focus on higher-priority issues. | To automate engagement with IT helpdesk personnel for IT requests. ServiceNow is a cloud-based platform that digitizes and automates workflows. It provides IT support for common and simple requests 24/7, and virtual agents handle common inquiries, freeing staff for higher-priority issues. Consistent processing could help reduce errors and increase efficiency. | Automated service requests; incident resolution recommendations; virtual agent responses; data insights and analytics reports; search results and recommendations; documented and categorized knowledge-base articles. | a) Purchased from a vendor | ServiceNow | Yes | Automated service requests; incident resolution recommendations; virtual agent responses; data insights and analytics reports; search results and recommendations; documented and categorized knowledge-base articles. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0109 | Westlaw (AI assisted legal research) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Addresses manual process of conducting legal research. | Performs legal research and improves accuracy and efficiency of legal research. | Recommends caselaw and other legal materials (statutes, regulations, scholarly articles, etc.) and, in some circumstances, an overview of the law in response to queries. | a) Purchased from a vendor | Thomson Reuters | Yes | Recommends caselaw and other legal materials (statutes, regulations, scholarly articles, etc.) and, in some circumstances, an overview of the law in response to queries. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0111 | Parallel Search from CaseText | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0112 | Qualtrics | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Qualtrics is a cloud-based service that can send out surveys and record responses. ENRD uses it to send questionnaires to its own employees to gather feedback on issues such as training needs and student feedback on training sessions. ENRD also uses Qualtrics to collect victim impact statements and to collect public input in environmental justice matters. Qualtrics includes a sentiment analysis feature, but ENRD has not used it. Sentiment analysis can review narrative responses, summarize the overall sentiment of a group of respondents, and surface insights and issues from a large number of responses. | It can provide immediate analysis of respondent sentiment and surface insights. It can reduce the hours and effort needed to read and score narrative responses and to summarize overall opinion on a subject. | Survey responses and reports summarizing data from responses. | Survey responses and reports summarizing data from responses. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0113 | SimplyFile | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | SimplyFile uses AI to learn a user’s email filing habits and then predict where they will want to move each email by displaying a list of suggested filing locations they can select from. This allows users to file their emails with a single button click, rather than having to click and drag them to the correct folders. | Faster and more accurate email filing | A sorted list of predicted filing locations (Outlook folders) | a) Purchased from a vendor | TechHit | Yes | A sorted list of predicted filing locations (Outlook folders) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ENRD | DOJ-0114 | Veritone | a) Pre-deployment – The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Improve transcription and translation of written, video, and audio files | Faster screening of large volumes of electronic materials or information that may be of interest in an investigation or discovery. | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | For text files translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0116 | Critical Mention | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Identification of public data | Reduction in time required to update | Summary of publicly available information | a) Purchased from a vendor | Critical Mention | No | Summary of publicly available information | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0117 | Evidence.com | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Inefficient and costly manual evidence review | Cost Savings, reducing court preparation times | Reports, narratives, and summaries | a) Purchased from a vendor | Axon | No | Reports, narratives, and summaries | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0118 | Flashpoint Ignite | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0119 | Insider Threat Management and User Activity Monitoring | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Identification and prevention of anomalous user behavior too subtle for human detection and analysis | This system proactively identifies and prevents costly data breaches, reduces incident response time, and gives EOUSA and the USAO comprehensive insight into user behavior to maintain security and compliance | Analytics and predictive models | a) Purchased from a vendor | No | Analytics and predictive models | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0120 | Object Classification Tool - Field Office Security Camera | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Object anomaly detection | Enhanced security | Security notifications based on anomaly detection | a) Purchased from a vendor | AI Model Provider | No | Security notifications based on anomaly detection | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0121 | Data Synthesis, Sentiment, Filtering, and Location Linking | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Data triage and analysis | Enhanced threat information detection | Tagged data for further confirmation, research, and analysis | a) Purchased from a vendor | No | Tagged data for further confirmation, research, and analysis | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0122 | Facial Recognition Technology | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Facial recognition | Generation of investigative leads | Potential leads through suggested facial matches | b) Developed in-house | No | Potential leads through suggested facial matches | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0123 | OCR and Translation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Transcription and translation | The AI will help the FBI digitize data. | Digital text | a) Purchased from a vendor | Yes | Digital text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0124 | License Plate Reader 1 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | License plate reader technology assists in locating vehicles associated with persons of interest. | Program uses character recognition to read and identify license plates. | Video | a) Purchased from a vendor | AI Service Provider | No | Video | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0125 | Enterprise Telecommunications Information System | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Speech to text. | Reduced customer wait time. | text | a) Purchased from a vendor | Yes | text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0126 | Facial Recognition Technology and Data Mapping Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Facial recognition of open-source images | Generation of investigative leads | Potential matches for human review | a) Purchased from a vendor | No | Potential matches for human review | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0127 | Human Language Extraction and Translation Technology | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Machine translation of documents and digital files | Faster FBI operations | Extracted and transcribed/translated text from documents and digital files | c) Developed with both contracting and in-house resources | AI Service Provider | Yes | Extracted and transcribed/translated text from documents and digital files | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0128 | Attrition and Background Models Capability | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0129 | Audio and Video Recording Management Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Prevents loss of required audio and video data within FBI custodial interview rooms | Tool facilitates audio and video recording within FBI custodial interview rooms, consistent with DOJ and FBI policy. | Audio and video recordings | a) Purchased from a vendor | No | Audio and video recordings | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0130 | License Plate Reader 2 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | License plate reader technology assists in locating vehicles associated with persons of interest. | Program uses character recognition to read and identify license plates. | Video | a) Purchased from a vendor | AI Service Provider | No | Video | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0131 | License Plate Reader 3 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | License plate reader technology assists in locating vehicles associated with persons of interest. | Program uses character recognition to read and identify license plates. | Video | a) Purchased from a vendor | AI Service Provider | No | Video | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0132 | National Crime Information Center | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Name matching | The models can search word embeddings much more quickly than would otherwise be possible and can return a larger set of similar phrases or misspellings. | Better search results | Better search results | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0133 | National Data Exchange | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Helps uncover valuable insights and connections in the text of criminal justice and law enforcement related information in the system. | The benefit of this use case is to provide an entity extraction feature to aid N-DEx searches so that users can accurately search for people and filter the search results based on a specified role type. | Person entity information from narrative text for lead generation | a) Purchased from a vendor | Yes | Person entity information from narrative text for lead generation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0134 | National Instant Criminal Background Check System | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Time-intensive review of database search results | Improved quality of search results for NICS analysts when they examine state law databases for laws relevant to a NICS background check. More effective searches on word embeddings will better identify patterns in the data. | Potential leads for human review | Potential leads for human review | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0135 | Next Generation Identification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The AI is intended to improve biometric and name-based matching for identification and investigation services. | The AI provides more accurate biometric and name-based matching | Biometric identification and search results containing candidates for potential investigative leads. | a) Purchased from a vendor | Yes | Biometric identification and search results containing candidates for potential investigative leads. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0136 | TIPS | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | It is intended to prioritize tips to be worked as well as determine if a tip should get a second human review. | The AI used in this case helps to triage immediate threats in order to help FBI field offices. | The system will prioritize the tips or route them to a second review based on thresholds. | c) Developed with both contracting and in-house resources | Yes | The system will prioritize the tips or route them to a second review based on thresholds. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0137 | Facial Recognition Technology and Data Mapping Tools - 2 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Identification of sex trafficking victims | Generation of investigative leads | Potential leads through suggested facial matches | a) Purchased from a vendor | No | Potential leads through suggested facial matches | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0138 | Language Translation, Optical Character Recognition (OCR), Object Detection, Language Detection, Alert Noise Reduction | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Autonomous Detection and Monitoring | Increased analytic capability | Capabilities to support data and analytics, data synthesis, filtering, and linking of open source & threat intelligence. | a) Purchased from a vendor | AI Service Provider | No | Capabilities to support data and analytics, data synthesis, filtering, and linking of open source & threat intelligence. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0142 | ASCVD (Atherosclerotic Cardiovascular Disease) Risk Estimator | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The ASCVD is used as a preventative measure to identify inmates that are at risk for heart disease in order to provide more aggressive treatment to reduce that risk. | The expected outcome would be to have less heart disease or related complications due to the proactive assessment of potential risk. | An estimated percentage of the possibility of the inmate developing heart disease over the next ten years and over the inmate's lifetime. | a) Purchased from a vendor | American College of Cardiology | No | An estimated percentage of the possibility of the inmate developing heart disease over the next ten years and over the inmate's lifetime. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0143 | Automated Medication Dispensing Cabinet | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | To log into the system faster and more securely by utilizing a fingerprint scanner. | To prevent unauthorized access to the medications held in the cabinet. | Assesses whether the fingerprint scanned matches the one in the system for each specific employee. | a) Purchased from a vendor | BD Pyxis | Yes | Assesses whether the fingerprint scanned matches the one in the system for each specific employee. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0144 | Automated Staffing Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | It will assess staffing levels within the FBOP. | The expected benefit of the AI use case is to maximize cost effectiveness of staffing needs for each institution. | Output reports provide how many positions are currently authorized within the FBOP. | a) Purchased from a vendor | Microsoft | No | Output reports provide how many positions are currently authorized within the FBOP. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0145 | Aztec Learning Software | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Provide learners with data driven course work in Aztec LMS. | Prediction: Reports of trends, new authentic. | Implementation and Assessment – The AI system associated with the use case is currently undergoing functionality and security testing. | Implementation and Assessment – The AI system associated with the use case is currently undergoing functionality and security testing. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0146 | BRAVO Classification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses statistical techniques to predict potential for misconduct for newly admitted inmates. In turn, this prediction is used to assign appropriate security levels. | Correctly classifying inmates' security level will decrease the level of misconduct towards other inmates and staff. | AUC score showing the degree to which the instrument correctly discriminates between those who commit misconduct and those who do not. | a) Purchased from a vendor | SAS | Yes | AUC score showing the degree to which the instrument correctly discriminates between those who commit misconduct and those who do not. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0147 | Building Automation Systems | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Automatic control of a building's HVAC, lighting, and other systems through a centralized building management system. | Reduce energy consumption and waste, monitor performance, and alert on device failures. | It will result in alerts. | a) Purchased from a vendor | No | It will result in alerts. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0149 | Community Treatment Pipeline Screening | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This is an FBOP-developed tool to identify individuals due to be released to the community who require clinical review for additional treatment services once released into pre-release confinement. | Reduces clinical reviews by a third by prescreening those with no SENTRY or BEMR indicators necessitating potential treatment needs. | Checkmarks for potential types of treatment needs found in SENTRY or BEMR. | a) Purchased from a vendor | SAS | Yes | Checkmarks for potential types of treatment needs found in SENTRY or BEMR. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0150 | Descript | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | AI synchronizes audio files to video | Video to be used for communications and training purposes | Output is a video file (e.g., MP4) | a) Purchased from a vendor | Descript | No | Output is a video file (e.g., MP4) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0153 | Google Translate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | FBOP employees have periodically used Google Translate to translate documents and notifications for the Spanish-speaking population from English to Spanish. | Translating documents from English to Spanish allows the Spanish-speaking population to stay well informed of the events and details of the institution. | Google Translate provides a word-for-word translation of the information entered, though it does not always translate accurately in a given context. | a) Purchased from a vendor | No | Google Translate provides a word-for-word translation of the information entered, though it does not always translate accurately in a given context. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0155 | InterQual | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0156 | Medical Claims Adjudication | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses Quantum Choice (QC) to adjudicate medical claims and analyze the data within the system. | Cost savings and ensured compliance with billing regulations and contract pricing terms. It also provides data analysis to assist with the FBOP's mission. | Medical billing payment decisions are made utilizing the AI. | a) Purchased from a vendor | Quantum Choice from Plexis | No | Medical billing payment decisions are made utilizing the AI. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0157 | Medical Designations Calculator | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Initial medical and mental health care levels. | Reducing the time to determine a final medical and mental health care level. | A medical and mental health care level score (1-4) | a) Purchased from a vendor | SAS | No | A medical and mental health care level score (1-4) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0158 | NLETs Arrest Classification | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses statistical techniques to classify text descriptions of arrests | Correctly classifying arrest records to inform recidivism assessments | Classifies arrests into offense types | a) Purchased from a vendor | SAS | No | Classifies arrests into offense types | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0159 | Pathfinder | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Human Resources | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | To assist FBOP employees with career pathways. | The system generates assessments based on user input. It scores the assessments taken to provide a user with options for career pathways. | Provides employees with career options based on assessments. | a) Purchased from a vendor | Azure | Yes | Provides employees with career options based on assessments. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0160 | Prisoner Assessment Tool Targeting Estimated Risk and Needs (PATTERN) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Agentic AI | The intended use is to predict the risk of recidivism for incarcerated adults. | Addressing the risk of recidivism with appropriate programming and services to reduce the likelihood of reengagement with the justice system. | Recidivism risk calculations. | b) Developed in-house | No | Recidivism risk calculations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0160 | Prisoner Assessment Tool Targeting Estimated Risk and Needs (PATTERN) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The tool uses pre-defined rules to score an inmate's recidivism risk level. | The tool provides the FBOP a recidivism risk instrument which objectively assesses an inmate's current level of risk for re-offending. | Recidivism risk score. | b) Developed in-house | Yes | Recidivism risk score. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0161 | Psychological Test Interpretation - Pearson Assessments | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Psychological test interpretation assistance. | Assistance during psychological evaluations and treatment planning. | Text interpretive reports. | a) Purchased from a vendor | Pearson Assessments | Yes | Text interpretive reports. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0162 | reCAPTCHA | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Uses reCAPTCHA to distinguish between a human and bot request. | It provides verification and security to our websites. | Decision: It determines if the request is coming from a human or bot to allow the request to be submitted via email to the FOIA office or through the Inquiry Portal. | a) Purchased from a vendor | Yes | Decision: It determines if the request is coming from a human or bot to allow the request to be submitted via email to the FOIA office or through the Inquiry Portal. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0163 | Static-99 Sex Offender Data System (SODS) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Static-99 is used as one data point in a 10-point manual process to evaluate an FBOP sex offender's risk for recidivism. The tool uses pre-defined rules to score an inmate's recidivism risk level. | To assist the evaluator in the recidivism risk review process by providing a key data point used in the overall evaluation. | A score indicating the inmate's risk for recidivism of a sexual offense. | b) Developed in-house | Yes | A score indicating the inmate's risk for recidivism of a sexual offense. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0164 | The R Project for Statistical Computing | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0165 | Thomson Reuters Drafting Assistant Tool for MS Word | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0166 | TruNarc, Smiths Detection | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | This AI will test suspected mail for narcotics. | It is utilized to confirm staff visual identification of narcotics in institution mail. | This system will provide reports on test results. | a) Purchased from a vendor | TruNarc, Smiths Detection | No | This system will provide reports on test results. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0167 | Trunet Systems | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Uses biometric (fingerprints and vocal) identification. | It is an additional security feature that allows only that particular adult in custody (AIC) access to their individual account. | This technology is used to provide inmates access to their Trust Fund Account information | a) Purchased from a vendor | Advanced Technologies Group | Yes | This technology is used to provide inmates access to their Trust Fund Account information | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0168 | Truview | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Analysis of data held within the TRU, visiting, case management, and volunteer systems. | To assist with the FBOP's mission by completing assessments and analysis of the data input into the TRU, visiting, case management, and volunteer systems. | Provides users with actionable information for investigative purposes. | a) Purchased from a vendor | Advanced Technologies Group | Yes | Provides users with actionable information for investigative purposes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0169 | UAS Threat Detector | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | Detects and analyzes unmanned aerial systems (UAS) near FBOP facilities to determine if the UAS is a threat to an institution. Provides reliable threat detection as part of FBOP’s overall mission to provide safe and secure facilities. | To maintain security of FBOP institutions. The benefits of such detections serve FBOP’s mission to protect society by confining offenders in the controlled environments of prisons and community-based facilities that are safe and appropriately secure. | Outputs provide UAS identification and location information. | a) Purchased from a vendor | No | Outputs provide UAS identification and location information. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0170 | UpToDate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | This tool is used for research for clinical practice guidance. The user can submit a condition and the tool compiles a list of treatments and information that is currently being recommended in the medical field. | Improved Patient Outcomes | Information, summaries, links to published peer-reviewed research, and treatments. | a) Purchased from a vendor | UpToDate | No | Information, summaries, links to published peer-reviewed research, and treatments. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0171 | Veritone | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0172 | Wellsaid | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | AI converts text to speech audio that is used in social media, training videos, and audiobooks. | Audio to be used for communications and training purposes | Output is an audio file (e.g., MP3) | a) Purchased from a vendor | Wellsaid | No | Output is an audio file (e.g., MP3) | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0173 | Perimeter Detection Fence (FLIR) | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0174 | Exiger Supply Chain Risk Management - DDIQ Research Engine | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provides DOJ with capabilities to perform Supply Chain Risk Management assessments to support agency cybersecurity posture. | Provides DOJ vendor profiles that aggregate data from open-source repositories using API keys. This data gives DOJ information to make an informed decision on whether to move forward with the acquisition of goods or services based on the risks identified during research. | List of outputs includes additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | a) Purchased from a vendor | Exiger | Yes | List of outputs includes additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0175 | Exiger Supply Chain Risk Management - DDIQ Due Diligence Analytics | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Cybersecurity | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provides DOJ with capabilities to perform Supply Chain Risk Management assessments to support agency cybersecurity posture. | Provides DOJ with analytical risk dashboards and company cybersecurity scorecards, which give DOJ the ability to make an informed decision on whether to move forward with the acquisition of goods or services based on the risks identified during research. | List of outputs includes additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | a) Purchased from a vendor | Exiger | Yes | List of outputs includes additional data and risk scores such as Foreign Ownership Control or Influence (FOCI), Reputational Criminal Regulatory (RCR), and Financial Health Risk | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0177 | CoPilot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Features assist with calendaring, meeting summaries, and email drafting. | Reduces administrative time for user. | Meeting summaries and proposed email responses. | Meeting summaries and proposed email responses. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0179 | Savan Group Intelligent Records Consolidation Tool | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / NSD | DOJ-0181 | Salesforce | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Addresses the problem of complex data analysis and decision-making by providing AI-driven capabilities that simplify and accelerate the process. | Purpose: Provides contextual, AI-powered predictions and insights to drive engagement and focus directly in the flow of work in Salesforce. This license also includes AI features, which are not in use at this time. Expected benefits: NSD procured CRM Analytics Plus licenses for the use of Tableau for reporting. | Drive outcomes at scale and get answers to inform. | Drive outcomes at scale and get answers to inform. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0184 | Enhanced Proactive Financial Analysis Techniques for Fusion Desktop | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Deploys predictive analytics to enhance recognition of, and highlighting of, fraudulent financial activity across HSTF NCC financial and criminal holdings. | Improve efficiencies by identifying and prioritizing financial fraud activity tied to open criminal investigations by including a broader data set and across multiple judicial districts to develop a better whole-of-US picture of financial fraud. | List of subjects with open criminal investigations which would be worth additional manual review for financial ties. | List of subjects with open criminal investigations which would be worth additional manual review for financial ties. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0185 | FinCEN Data Summarization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | Analyze a SAR through generative AI functionality | Speed up production times for providing investigative support to the field. It also makes analysts more available to perform deeper analysis. | Summarization of applicable FinCEN suspicious activity reports (SARs). | Summarization of applicable FinCEN suspicious activity reports (SARs). | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0186 | Generation of Graphs and Charts | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Create charts and graphs by describing the requested data to integrate. | Charts and graphs aid in reporting of HSTF investigations. Enabling this technology will tremendously reduce the time spent formatting in traditional Microsoft products. | Recommendation: Provide sample data to create charts and graphs to populate. | Recommendation: Provide sample data to create charts and graphs to populate. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0187 | Generation of Large Test Data Sets | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Using generative AI modeling, we will create large volumes of test data so that quality assurance, external audits and penetration testing, system demos & integration, etc., can use that generated test data at scale. | Developing and operating with a test set of data will enable engagements between HSTF and external groups, where manual generation of test data at scale is not feasible. | Test Data Sample and/or mock up of test data from the applications used within the environment | Test Data Sample and/or mock up of test data from the applications used within the environment | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0188 | Machine Learning for Decision Support for Fusion Desktop | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | To take into account various criteria such as analyst workloads, similarities in case narratives and targets, and regional factors to suggest the optimal routing and assignment for product requests. | Improve efficiencies in routing requests for HSTF Products by sending them to the analysts and/or analytic units most suitable for working the product. Reduce rework and streamline the approvals process by suggesting agency approvers based on automated review of the product's content and ensuring all appropriate agencies are consulted for manual review. | Suggested assignment / routing for HSTF NCC Product Requests (e.g. to an analytic unit or specific analyst). Suggested approver assignments based on referenced agency data. | Suggested assignment / routing for HSTF NCC Product Requests (e.g. to an analytic unit or specific analyst). Suggested approver assignments based on referenced agency data. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0189 | Machine Learning for Decision Support for DOJ MIS | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Identify proposed HSTF investigations for prioritized review, approval, and potential case designation based on pre-determined success criteria. | Improve efficiencies in routing HSTF proposals through the workflow to receive HSTF designation. Identify the best use of resources through machine learning trained on investigations matching emerging threats. | Lists of proposed HSTF investigations in prioritized order. | Lists of proposed HSTF investigations in prioritized order. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0190 | Narrative Analysis for Fusion Desktop | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Will create summaries or precis from large volumes of text narratives submitted on Fusion Desktop forms, for both information triage and trend analysis / emerging threat detection purposes. Gen AI would also allow interactive queries (e.g. you ask it questions) about the data it has summarized, and would be transparent (with citations of underlying data as needed). | Provide rapid review of information submitted as data ingested into the Fusion Desktop database. This will reduce the time necessary during data entry and review and will provide guidance on emerging threat areas which HSTF NCC may use to determine resource allocation or areas for additional study. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0191 | Narrative Analysis for DOJ MIS | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Will create summaries or precis from large volumes of text narratives submitted on HSTF forms, for both information triage and trend analysis / emerging threat detection purposes. Gen AI would also allow interactive queries (e.g. you ask it questions) about the data it has summarized, and would be transparent (with citations of underlying data as needed). | Provide rapid review of information submitted on DOJ MIS forms, including Investigation Initiation Forms (IIFs) and interim and final updates on open investigations. This will reduce the time necessary during data entry and review for forms and will provide guidance on emerging threat areas which HSTF NCC may use to determine resource allocation or areas for additional study. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | Summary of long narrative text (on a per form basis). Summary of trends common across multiple forms' narrative texts. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0192 | Train ML on Intelligence Analyst Best Practices | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Train AI on exemplary analyst work to suggest courses of action to all IAs while they perform product research and development. | Improve efficiencies in developing HSTF NCC Products by providing decision support and limited automation by ML trained on exemplar analysts at the NCC. Provides guidance and suggestions particularly for resource-intensive actions, like conducting open source, commercial, or offline (swivel-chair) searches of data sets outside the NCC. | Suggested actions at each phase of HSTF NCC Product development workflow. | Suggested actions at each phase of HSTF NCC Product development workflow. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0193 | Speech to Text Managed Service - Voice Transcription to Text | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Reduce turnaround times for transcription and reduce the need for contracted transcription services of audio files. | Cost savings from less time and materials needed for an individual to fully process the recording, and quicker turnaround of transcription delivery. | Speech-to-text recognition of spoken content within an audio file. In addition, a summary of the transcribed content can be created. | c) Developed with both contracting and in-house resources | Microsoft, OpenAI | Yes | Speech-to-text recognition of spoken content within an audio file. In addition, a summary of the transcribed content can be created. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0194 | Inspection Productivity Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Generative AI | Reduce the time involved in analyzing draft reports and content recommendations in compliance with published standards, guidance, and rule books. | Quicker turnaround for creating reports, analysis of reports to ensure consistency in content and removal of redundancy, and to assist in the management of report length. | Suggested phrases to reword a paragraph and/or identification of errors in statements as they align towards defined standards, guidance, or rule books. | Suggested phrases to reword a paragraph and/or identification of errors in statements as they align towards defined standards, guidance, or rule books. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0195 | Internal Component-specific chatbot service | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provide human-like conversational responses to conduct general information queries and suggestions towards improving text without feeding into a commercially exposed AI model. | Improve work efficiencies and curb user generative AI usage to an environment that is controlled and contained within GCCH and the DOJ OIG subscription vice a commercial service. Furthermore, the information and models generated within the GCCH environment stay within GCCH and do not further train the models of commercial products or services. | The AI provides generative AI human-like responses/answers to questions and/or statements from a user. In addition, the output can be a generated summary of an inputted excerpt or publicly available information. | a) Purchased from a vendor | Microsoft, OpenAI | Yes | The AI provides generative AI human-like responses/answers to questions and/or statements from a user. In addition, the output can be a generated summary of an inputted excerpt or publicly available information. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0196 | Dragon | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Automated word processing dictation and speech transcription. The application is speech recognition software that initiates commands on a device or takes dictation into a word processing application. The application assists individuals with reasonable accommodations related to and/or difficulty with typing, seeing, or navigating a Windows operating system environment. | Provide reasonable accommodations related to and/or difficulty with typing, seeing, or navigating a Windows operating system environment to be productive throughout the workday. | Transcription of dictated words into a word processing document or initiation of macro commands in the Windows operating system environment. | a) Purchased from a vendor | Nuance Communications (owned by Microsoft) | Yes | Transcription of dictated words into a word processing document or initiation of macro commands in the Windows operating system environment. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||
| Department Of Justice | Department of Justice / OIG | DOJ-0197 | Informatica - CLAIRE | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Manual scanning and cataloging of data sets related to data analytics' mass data correction, business rules translation, data/column similarity, data anomaly detection, data relationship inference, data domain inference, data volume projections, cost of data breach, natural language description of code, business term associations, schema mapping, entity extraction, smart data visualization, and economic value of data. | The Informatica CLAIRE engine will help catalog enterprise data quickly and classify and organize OIG data. It will also automate data curation and connect data across OIG from disparate sources. It will also track data movement from system views to column-level lineage. | Provide machine learning-based discovery to scan and catalog data assets across the OIG. Enterprise Data Catalog provides intelligence by leveraging metadata to deliver recommendations, suggestions, and automation of data management tasks. | a) Purchased from a vendor | Informatica | Yes | Provide machine learning-based discovery to scan and catalog data assets across the OIG. Enterprise Data Catalog provides intelligence by leveraging metadata to deliver recommendations, suggestions, and automation of data management tasks. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0198 | Nlets Nationwide License Plate Reader Pointer Index | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Law enforcement usage of Optical Character Recognition (OCR) to assist with license plate reading under DOJ Policy. LPR data is managed and maintained by other entities for law enforcement purposes. | Quick notification of results from license plate queries. | JWIN facilitates access to the Nlets Nationwide License Plate Reader Pointer Index, which provides access to states and/or federal agencies that maintain their own LPR repositories. LPR data is managed and maintained by other entities for law enforcement purposes. | a) Purchased from a vendor | Thomson Reuters | Yes | JWIN facilitates access to the Nlets Nationwide License Plate Reader Pointer Index, which provides access to states and/or federal agencies that maintain their own LPR repositories. LPR data is managed and maintained by other entities for law enforcement purposes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0199 | Axon | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | No AI is utilized; however, the use case captures raw video and audio footage that are accessed only to the extent needed for prosecution or investigation purposes. AI functionalities for analysis, voice transcription, and redaction are not utilized or relevant for our use. | This use case does not utilize AI functions. AI features are included in the product, but have not been installed and are not relevant for our use. | No AI system outputs. Standard outputs are archival raw videos and audio footage. | a) Purchased from a vendor | Axon | Yes | No AI system outputs. Standard outputs are archival raw videos and audio footage. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0200 | SAS Enterprise Miner - Grant risk assessment model | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0201 | MACO Project: Law Enforcement CAD Data Autocoder | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | MACO is intended to solve the lack of standardization in Computer-Aided Dispatch (CAD) data across law enforcement agencies. CAD event descriptions are free-text, highly variable, and differ widely in terminology, structure, and coding practices. Because of this inconsistency, it is currently not possible to aggregate, compare, or analyze CAD data across jurisdictions at scale. MACO uses machine learning and language models to automatically classify raw CAD text into a standardized event taxonomy, enabling consistent analysis, cross-agency comparisons, and the development of national estimates of calls for service. | The AI provides standardized classifications of CAD event text, allowing BJS and partner agencies to analyze police activity consistently across jurisdictions. For BJS, this enables the production of scalable, comparable national estimates of calls for service, filling a major data gap not addressed by traditional crime measures. For state and local agencies, the standardized schema improves internal organization of CAD data and supports regional or state-level comparisons of police workload and community needs. For the research community and the public, MACO expands understanding of how law enforcement resources are used, the types of events agencies respond to, and broader patterns of community demand for police services. Overall, the tool enhances data quality, improves analytic capacity, and supports evidence-based decision-making across the criminal justice ecosystem. | The system outputs a standardized event-type classification for each CAD record. For each raw text description, the model generates a predicted category from a predefined event taxonomy (e.g., “Property Crime: Theft,” “Traffic Incident,” “Disturbance,” etc.). The final deliverable is a CAD dataset with these standardized classifications appended to each record, enabling consistent analysis and aggregation across agencies. | The system outputs a standardized event-type classification for each CAD record. For each raw text description, the model generates a predicted category from a predefined event taxonomy (e.g., “Property Crime: Theft,” “Traffic Incident,” “Disturbance,” etc.). The final deliverable is a CAD dataset with these standardized classifications appended to each record, enabling consistent analysis and aggregation across agencies. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0203 | Offense Text Auto-Coder (OTAC) - Automated offense coder from offense charge text strings used for the BJS National Pretrial Reporting Program | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The purpose of this tool is to improve description and comparability of offense charges across jurisdictions. When using justice administrative data from various jurisdictions (localities, states, and federal), the way offenses are described (i.e., the exact text strings used) varies greatly. For example, assault and battery may be spelled out, or abbreviated in novel ways such as A&B, A & B, A+B, A?B, battery & aslt, etc. This tool is used to facilitate grouping identical concepts under one common set of offense codes. The data that BJS makes available will be aggregated or deidentified. | Improved comparisons of criminal justice data | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | b) Developed in-house | Yes | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0205 | Rapid Offense Text Autocoder (ROTA) - Automated offense coder from offense charge text strings used for the BJS Criminal Cases in State Courts | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The purpose of this tool is to improve description and comparability of offense charges across jurisdictions. When using justice administrative data from various jurisdictions (localities, states, and federal), the way offenses are described (i.e., the exact text strings used) varies greatly. For example, assault and battery may be spelled out, or abbreviated in novel ways such as A&B, A & B, A+B, A?B, battery & aslt, etc. This tool is used to facilitate grouping identical concepts under one common set of offense codes. The data that BJS makes available will be aggregated. | Improved comparisons and analysis of criminal justice data. | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | b) Developed in-house | No | The output of this autocoder is common definitions for offense charge classification. It has been trained on an offense crosswalk for the BJS National Corrections Reporting Program (NCRP) (https://www.icpsr.umich.edu/files/NACJD/ncrp/Offense_Code_Crosswalk.xlsx) to convert plain-text offense descriptions into classifications routinely used by the NCRP. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0206 | Research Abstract Screening for CrimeSolutions | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / PAO | DOJ-0208 | Adobe | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DOJ needs to communicate effectively internally and externally. This tool can support image production for communication purposes. | Adobe has a function that enables the generation of images. PAO does not use this function/feature. | Images, if feature were in use | Images, if feature were in use | ||||||||||||||||||||
| Department Of Justice | Department of Justice / PAO | DOJ-0210 | Hootsuite | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Optimizes social media posting | PAO uses Hootsuite to schedule and manage social media products. Hootsuite also includes AI-powered social listening features, but PAO does not use those features. It can advise on when the best time to post would be for maximum engagement, which will help promote our message. | Recommendations about date and time to publish content | a) Purchased from a vendor | Hootsuite | Yes | Recommendations about date and time to publish content | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / PAO | DOJ-0212 | Veritone Digital Media Hub | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Expedites the identification and labeling of photos. | The Veritone Digital Media Hub includes AI features that allow PAO to search our event photo databases and identify objects in those photo catalogs. Increased efficiency of searches of archival images so that PAO can find and continue using assets. | Recommendations of images that match the terms searched for | a) Purchased from a vendor | Veritone | Yes | Recommendations of images that match the terms searched for | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0213 | Lexis Nexis (People Search) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve the accuracy and comprehensiveness of a clemency applicant's personal data. | Allows the retrieval of personal data for an individual such as historical addresses. Improve the accuracy and comprehensiveness of a clemency applicant's personal data. | Summary | a) Purchased from a vendor | Lexis Nexis | Yes | Summary | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0215 | Pacer Search | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Validate identity | The service provides more confidence that the correct person has been identified | PII about the individual. | a) Purchased from a vendor | Techsmith | No | PII about the individual. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0217 | Westlaw (People Search) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve the accuracy and comprehensiveness of a clemency applicant's personal data. | Allows the retrieval of personal data for an individual, such as historical addresses, improving the accuracy and comprehensiveness of a clemency applicant's personal data. | Summary | a) Purchased from a vendor | Westlaw | Yes | Summary | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0220 | AWS Transcribe | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Creates machine transcription of audio or video to facilitate review and evaluation of evidence. | Machine transcription facilitates faster review of data, decreasing time spent listening to and evaluating audio and video files. Saves funds that would otherwise be spent on transcription vendors. | Transcribed text | a) Purchased from a vendor | Amazon Web Services | Yes | Transcribed text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0221 | AWS Translate | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Creates machine translation of foreign language documents to facilitate review and evaluation of evidence. | Machine translation facilitates faster review of foreign language documents. Permits selection of key documents to be sent to vendors for evaluation and translation, speeding up review considerably and saving funds that would otherwise be spent on translation vendors. | Machine translated text | a) Purchased from a vendor | Amazon Web Services | Yes | Machine translated text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0222 | JAWS (Text-to-Speech Assistant for Accessibility) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Visually impaired personnel need assistance interpreting pictures and other visual objects and interacting with documents in a non-linear way | Aids visually impaired personnel with documents. | Audio descriptions of images and summaries of text | a) Purchased from a vendor | Freedom Scientific Inc. | No | Audio descriptions of images and summaries of text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0223 | Trial Presentation Software | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Assists in presenting documents and video in a courtroom setting | More effective courtroom advocacy; decreased time spent assembling presentations | Courtroom presentations | a) Purchased from a vendor | OnCue Technology, LLC | Yes | Courtroom presentations | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0224 | Deposition Transcript Management | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Assists with marking up deposition transcripts and video | More efficient organization, annotation, and display of transcripts and deposition video | Annotated deposition video and transcripts | a) Purchased from a vendor | LexisNexis | No | Annotated deposition video and transcripts | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0225 | Axon Video Retention Solution (VRS) - object recognition and redaction | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The Object Recognition AI capabilities present in the Axon Evidence Redaction suite create first-pass results for potential desired redactions of individuals' faces, license plates, and computer terminals. The AI is intended to reduce manual work effort and reduce the time it takes to redact video and audio footage. | This product allows for the protection of the identities of Law Enforcement Officers and the public. The only subjects not blurred/redacted are the target(s) of the arrest. In addition, the expected benefits of using the object recognition capability include reduced USMS personnel time required to produce redacted footage and obviating the need to procure outside redaction services. | Draft video file with USMS-selected desired redaction areas (faces, license plates, or computer terminals). Note: Redacted file is not complete until human intervention validates and/or corrects AI suggestions. | a) Purchased from a vendor | Axon Enterprise, Inc. | Yes | Draft video file with USMS-selected desired redaction areas (faces, license plates, or computer terminals). Note: Redacted file is not complete until human intervention validates and/or corrects AI suggestions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0226 | Facial Recognition Technology | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | Facial Recognition Technology helps to narrow down potential subjects for further investigative analysis. The AI assists with the possible identification of an investigative subject but the AI in this use case is only an investigative lead and never grounds for law enforcement actions. All leads generated with this AI use must be corroborated with additional law enforcement techniques before actioned. | An increase in investigative efficiency leading to faster apprehension of violent fugitives and sex offenders and more rapid recovery of critically missing children. | Matches query photograph with publicly available images. | a) Purchased from a vendor | Clearview AI | Yes | Matches query photograph with publicly available images. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0227 | JMIS: JARS | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Increases efficiency in the movement of prisoners and reduces manual labor. | Cost savings, labor savings | Suggested scheduled prisoner movements based on previous successful movements | b) Developed in-house | Yes | Suggested scheduled prisoner movements based on previous successful movements | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0228 | JMIS: Route Optimizer | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | More accurate optimization of flight schedule | Evaluates over 2000 possible flight options and selects the most optimal flight. Improves efficiency of JPATS through the optimal use of flight assets. | Proposed flight schedule | c) Developed with both contracting and in-house resources | Yes | Proposed flight schedule | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0229 | UiPath OCR activity | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Errors and delays in manual data interpretation and data entry impact critical events in customer journeys and business process efficiency. UiPath states that it is automation software with intelligent document processing capabilities to replace manual processes before and after the reading of the flat file. Some of its stated features are extracting text and allowing entire workflows to take place in a single application with one application license. | Automating data extraction from various documents and images, thereby increasing efficiency, accuracy, and speed in processes that involve manual data entry and document processing. | Extracts and interprets data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. It is designed to process documents intelligently, using a combination of rules, templates, and specialized or generative language models. | Extracts and interprets data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. It is designed to process documents intelligently, using a combination of rules, templates, and specialized or generative language models. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0230 | Unmanned Aerial Systems (UAS) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Agentic AI | Collision avoidance | The AI predicts the movement of obstacles and subjects to plot a safe and efficient flight path. This allows the drone to anticipate and smoothly maneuver around objects instead of simply reacting to them. Skydio drones use AI to power their core autonomy features, enabling them to fly themselves safely and intelligently while a human operator focuses on the mission. | Skydio's AI output systems provide automated, real-time data capture and modeling for complex environments by combining advanced onboard AI and computer vision with high-resolution cameras | a) Purchased from a vendor | Skydio | Yes | Skydio's AI output systems provide automated, real-time data capture and modeling for complex environments by combining advanced onboard AI and computer vision with high-resolution cameras | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0231 | Video Transcription Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | USMS uses open source natural language processing technologies and python code to transcribe an audio or video file into plain text. | The product allows analysts to quickly convert a video or audio file to text. | Transcription NLP algorithm | b) Developed in-house | Yes | Transcription NLP algorithm | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0232 | Axon Video Retention Solution (VRS) - Transcription | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Computer Vision | The Transcription AI capabilities present in the Axon Evidence Digital Evidence Management System create first-pass results for transcription of spoken language in video and audio files into text. The AI is intended to reduce manual work effort and reduce the time it takes to transcribe words spoken in video and audio files into typed text. | The expected benefits of using the transcription capability include reduced USMS personnel time required to manually transcribe words spoken in video and audio files into typed text and obviating the need to procure outside transcription services. | Draft transcribed text associated with the video/audio file being transcribed. Note: Transcription is not complete until human intervention validates and/or corrects AI suggestions. | a) Purchased from a vendor | Axon Enterprise, Inc. | Yes | Draft transcribed text associated with the video/audio file being transcribed. Note: Transcription is not complete until human intervention validates and/or corrects AI suggestions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0233 | Open Source Investigative Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | USMS uses this tool for authorized investigative lead generation which will improve efficiencies of public safety and law enforcement missions. | More efficient screening of leads for potential investigative actions. | Recommendation | a) Purchased from a vendor | Vendor proprietary | Yes | Recommendation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0234 | JPATS Mobile App (Biometrics) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Transportation | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Quicker identification of prisoner record from mobile manifest | Time and labor savings resulting in cost savings | Manifest record of prisoner being moved | c) Developed with both contracting and in-house resources | Rank One Computing | Yes | Manifest record of prisoner being moved | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / USNCB | DOJ-0235 | Aware | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This commercial tool allows USNCB to ingest and process biometric information shared by domestic and international partners. | Empowers USNCB to automate processing biometric data, improving the speed with which information is shared with partners. | Boarding passes, Ticket changes, Tools for managing changes in travel | a) Purchased from a vendor | Aware Technologies | Yes | Boarding passes, Ticket changes, Tools for managing changes in travel | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / USNCB | DOJ-0236 | Language Weaver | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | | Retired | a) High-impact | High-impact | |||||||||||||||||||||||||
| Department Of Justice | Department of Justice / USNCB | DOJ-0239 | Thomson Reuters CLEAR | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Access to large data sets used for locating persons of interest. | Timely access to key data in the pursuit of persons of interest | Data | a) Purchased from a vendor | Thomson Reuters | Yes | Data | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / USTP | DOJ-0240 | USTP AI Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Intended purpose is to enable USTP's IT team to understand the process to develop, test, tune, and use various Microsoft AI services (Azure Open AI, Azure Open AI Foundry, Microsoft Copilot Studio). A secondary purpose is understanding the full costs and time to develop and implement a use case from start to finish. A tertiary purpose is to ensure the Generative AI service is providing useful responses and value given the cost and time to set up and deploy. | Intended benefits of using Microsoft AI services would be to enable USTP staff to find information quickly on relevant questions they may have. The use cases tested in the Pre-Deployment phase would help reduce Help Desk calls and increase the productivity of users by enabling them to find technical information quickly. Potential to help generate new content based on pilot testing. | Because the outputs vary with each AI Assistant's objective, USTP provides a couple of examples that are in Pre-Deployment Technical Feasibility testing now: 1) HR Assistant - Trained on USTP's SharePoint Intranet pages to answer common questions about various human resources support issues. 2) Briefing Assistant - Reviews public documents (previously submitted USTP briefs on specific bankruptcy cases) to easily search and find these cases. | Because the outputs vary with each AI Assistant's objective, USTP provides a couple of examples that are in Pre-Deployment Technical Feasibility testing now: 1) HR Assistant - Trained on USTP's SharePoint Intranet pages to answer common questions about various human resources support issues. 2) Briefing Assistant - Reviews public documents (previously submitted USTP briefs on specific bankruptcy cases) to easily search and find these cases. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0242 | Writing assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Quality and consistency of written comments | Faster FBI operations | Text | Text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / PARDON | DOJ-0243 | Voicemail Transcription, Translation and Summarization | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | a) High-impact | High-impact | Generative AI | PARDON would like to leverage transcription, translation, and summarization services available in the GCC High cloud environment to help reduce the processing time and level of effort associated with responding to voicemail inquiries. | Cost savings, reducing customer wait times, improving customer experiences, improving PARDON Attorney experiences, and improving multi-lingual access to the government. | The primary output from this use case is an email that provides an AI-generated summary of the voicemail and includes the following attachments: the original voicemail (.wav file), a text file that includes the transcribed voicemail, and a text file that includes the translated voicemail (for non-English voicemails). This email is sent to the shared PARDON inbox and processed with other email requests. | a) Purchased from a vendor | Amazon | Yes | The primary output from this use case is an email that provides an AI-generated summary of the voicemail and includes the following attachments: the original voicemail (.wav file), a text file that includes the transcribed voicemail, and a text file that includes the translated voicemail (for non-English voicemails). This email is sent to the shared PARDON inbox and processed with other email requests. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0244 | Veritone | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Improve transcription and translation of written, video, and audio files | Veritone is used to translate and transcribe non-English language audio for attorney review. This machine translation does not constitute an official record, but is a tool to allow initial review of the audio by the attorney. Veritone is also used to provide English language translations of large sets of documents containing non-English text in order to get an initial idea of the document contents; these do not replace official translation of evidence. | For text files that are translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | a) Purchased from a vendor | Veritone | Yes | For text files that are translated into English, the output contains the translated text. For audio and video files, a text file transcribing the translation is created, as well as a closed-captioning file. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0245 | UiPath Document Understanding | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Reduce errors and delays in manual data interpretation and data entry. Potential for significant time savings, additional security, and reliability, since staff are not working through multiple applications to get the same task done. | Use case helps improve customer experience. It simplifies the processing of complex, unstructured data, expediting decision-making, processing, onboarding, and servicing. Automating document processing also reduces the risk of errors. By mitigating the risk of human error, data input errors, missed information, and incorrect procedures are less likely to occur. The result is improved compliance, reduced time people spend on rework, and fewer losses for the agency. | Extracts and interprets data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. | Extracts and interprets data from a wide range of document types and formats, including images, PDFs, handwriting, signatures, checkboxes, and tables. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0246 | UFMS ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Create a personal assistant to provide application-level support to Unified Financial Management System (UFMS) users based on their functional needs/tasks. | Reduce the need for system users to do manual research, reducing subsequent tier 1 help desk requests. | Formatted text response to specific user questions. | Formatted text response to specific user questions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0247 | Translation tool | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Provide USMS with tools that can be integrated with USMS and USNCB data stores, such as email content and other files, to translate multiple languages to English and vice versa. | The proposed tools are more cost-efficient than other software-based solutions. In addition, they support significantly more languages than those purchased with previous tools. | Translated text | Translated text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0248 | Transcription | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Time-intensive manual transcription | Faster FBI operations | Form with transcribed text | Form with transcribed text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0249 | Table of Contents / Table of Authorities Word plugin | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Efficient generation of tables of contents and tables of authorities | Greatly reduces attorney and support staff time spent on generating tables | Tables of contents and authorities in draft briefs | a) Purchased from a vendor | Levit & James, acquired by Litera | No | Tables of contents and authorities in draft briefs | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CRM | DOJ-0250 | Systran Translate Server | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | CRM uses Systran Translate Server for machine translation. | Translate data from many languages to allow for review and investigation. Systran leverages NLP research with human expertise to train and evaluate models. | Translation | a) Purchased from a vendor | Systran | Yes | Translation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0251 | System Performance Monitoring | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Improve monitoring of the health of IT and litigation support systems. Better anticipation of outages or slowdowns. | Fewer and shorter IT outages due to faster response times and better anticipation of problems | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0252 | Synthetic data generation for software testing | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Synthetic data generation for software testing | Faster FBI operations | Test data | Test data | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0253 | Symphony AD-Hoc Batch Processing | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Translation, transcription, and summarization tool for language processing | Faster FBI operations | Transcribed and translated language | b) Developed in-house | Yes | Transcribed and translated language | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0254 | Summarizing Inspection Actions and Results for Future Inspectors | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | The Office of Inspection (IN) is leveraging the Business Improvement Section (ACB) to automate a large number of its work processes. | Distilling large data sets, interpreting graphs, writing final reports, and summarizing results is time consuming. An AI prompt can assist in creating a rough draft in significantly less time than the time and effort required from DEA's human capital resources. Cost savings could be achieved from reduced full-time equivalents spent on generating end-user products. In addition, the turnaround time involved with many inspection result findings work processes would decrease. | A rough draft of inspection results would be generated for review and approval. | b) Developed in-house | No | A rough draft of inspection results would be generated for review and approval. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0255 | Summarization Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Summarization | Faster FBI operations | Data summarization | a) Purchased from a vendor | No | Data summarization | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0256 | Stream Processing & Analytics | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Stream processing framework for complex analytics | We have identified a number of open-source tools that we believe could assist us with graphing relationships and other forms of data visualization. | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0257 | Storyblocks | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0258 | Smartphone and tablet operating systems and features | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DOJ's time-sensitive mission benefits from optimizing use of its smartphones and tablets. It also needs to update operating systems for cybersecurity features. | Supports DOJ personnel to better serve the American public, especially when not directly utilizing their DOJ-issued computers. | Optimized performance and functionality of approved capabilities on DOJ-issued smartphones or tablet devices. | a) Purchased from a vendor | No | Optimized performance and functionality of approved capabilities on DOJ-issued smartphones or tablet devices. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0259 | Sentiment Analysis Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Human Resources | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Improved marketing of FBI jobs | Improved FBI recruitment and hiring | Aggregated trends and patterns | a) Purchased from a vendor | No | Aggregated trends and patterns | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0260 | Search Tool for Prioritization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Search result prioritization | Faster FBI operations | Prioritized search results | Prioritized search results | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0261 | Search Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Better search results | Saving time resulting in faster FBI response | Search results | a) Purchased from a vendor | AI Service Provider | No | Search results | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0262 | Redaction Tool 2 | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Time-intensive manual tasks | Faster FBI operations | Suggested redactions | Suggested redactions | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0263 | Redaction Tool 1 | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Time-intensive manual tasks | Faster FBI operations | Suggested redactions | c) Developed with both contracting and in-house resources | Yes | Suggested redactions | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0264 | Record Digitization | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Computer Vision | Many agency records, including immigration case files, are paper records. EOIR must digitize paper records into electronic records to comply with federal laws requiring the agency's transition to digital processes. Parties to EOIR immigration proceedings experience delays in accessing case-related information when case records are maintained in paper format. EOIR must make voluminous copies of paper records to respond to records requests or otherwise spend time scanning paper records for digital transmission. EOIR has limited storage space available for paper records. | Transition many components of the case adjudication process to a more efficient, primarily digital process. Improve access to case information for parties to EOIR immigration proceedings. Improve record request and response processes. Eliminate need for costly physical space to store paper records. | Digital agency records of sufficient authenticity, reliability, usability, and integrity to replace the original paper record. | Digital agency records of sufficient authenticity, reliability, usability, and integrity to replace the original paper record. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0265 | Public Comment Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | DOJ strives to better serve the American public. To support timely responses to public comment submissions, especially for regulatory missions, this capability facilitates efficient processing of duplicate and similar comments, while helping categorize, cite, and map public comments. | This tool improves text analysis and the timeliness of such analysis. Importantly, the tool contains technology to quickly identify, organize, and address high-volume public comments, including letter submissions. It supports the development of dashboards to provide metrics. | Data results, comparisons, and analyses for DOJ personnel to review and assess. | a) Purchased from a vendor | Docketscope | No | Data results, comparisons, and analyses for DOJ personnel to review and assess. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / PRAO | DOJ-0266 | ProLaw | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | ProLaw assists PRAO attorneys in searching a large quantity of prior PRAO inquiries and advice (stored within the database) to identify relevant historical inquiry files that will assist the attorneys in determining how PRAO has advised on similar matters in the past. | The benefit of ProLaw is that it enables PRAO to store all PRAO inquiry files digitally, consistent with the component's records retention schedule, and then quickly search large quantities of inquiry files to identify historical inquiries and advice relevant to a current matter a PRAO attorney is working on. This greatly reduces the amount of time it takes PRAO staff to research, which then reduces the wait time of the Department attorney who has requested PRAO advice. Because ProLaw allows PRAO to store records digitally, it also provides government cost savings on the amount of money paid to store hard-copy records. | ProLaw's output is information. Specifically, PRAO uses ProLaw to identify all of the digital inquiry files in the database that are consistent with the user's selected search query. | a) Purchased from a vendor | Thomson Reuters | No | ProLaw's output is information. Specifically, PRAO uses ProLaw to identify all of the digital inquiry files in the database that are consistent with the user's selected search query. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0267 | Procurement Data Triage Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | To assist analysts in manual review | More efficient and comprehensive procurement decisions | Data triage | a) Purchased from a vendor | AI Model Provider | No | Data triage | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0268 | Administrative Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Time-intensive responses to questions | Faster FBI operations | Answers to questions | Answers to questions | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0269 | Policy Chatbot | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Chatbot for policy | Faster FBI operations | Location of user manuals and documentation | a) Purchased from a vendor | AI Model Provider | Yes | Location of user manuals and documentation | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / ATF | DOJ-0270 | PLX | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Natural Language Processing (NLP) | Forensic analysis of data from multiple electronic investigation sources, including mobile phones, computers, and warrant returns, within the context of criminal investigations. | Increases the efficiency of identifying pertinent information within the context of criminal investigations. | Notifications of potential entity matches for review. Link analysis visualizations. | a) Purchased from a vendor | PenLink | No | Notifications of potential entity matches for review. Link analysis visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0271 | Pega GenAI | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | GenAI is a tool that can help developers generate workflows (code in Pega) faster | Increased developer throughput on the Pega Platform | It generates software that is unique to the Pega program. | It generates software that is unique to the Pega program. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0272 | Palantir | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Generative AI | Integration and analysis of case information. | Reduction in time required to update and maintain an accurate case management system. | Reports, narratives, and summaries | a) Purchased from a vendor | Palantir | No | Reports, narratives, and summaries | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0273 | Optical Character Recognition Tool 3 | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Procurement & Financial Management | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Automating administrative tasks | Faster FBI operations | Text data | a) Purchased from a vendor | Yes | Text data | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0274 | Optical Character Recognition Tool 1 | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0275 | Optical Character Recognition Tool 2 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Digitization of data | Faster FBI operations | Text | c) Developed with both contracting and in-house resources | No | Text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0276 | Object Detection Tool 2 | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Data triage | Faster FBI operations | Investigative leads | Investigative leads | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0277 | Object Detection Tool 1 | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Identifying if there is a barrier to biometric matching | Better biometric matching | Probability score for presence of a barrier | a) Purchased from a vendor | Yes | Probability score for presence of a barrier | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0278 | NIST Compliance Recommender | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Information Technology | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Manual assessments of NIST guidelines | Faster FBI operations | Reports and data | b) Developed in-house | No | Reports and data | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / OPR | DOJ-0279 | NetDocuments | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | The AI functionality of NetDocuments consists of predictive suggestions for saving files, such as Word documents and Outlook emails. Through machine learning, NetDocuments predicts the OPR matter into which it thinks the end user should save a specific file. The AI feature is intended to make the process of saving files into the pertinent matter number more efficient and streamlined. | Because NetDocuments is OPR's repository of records, it is important that OPR staff save all matter-related files into NetDocuments associated with the correct OPR matter number. The AI feature in NetDocuments is expected to make the process of saving files into NetDocuments more efficient and user-friendly. That will both encourage end users to save files into NetDocuments and assist in making sure that files are correctly associated with the proper OPR matter numbers. | The AI output from NetDocuments consists of predictive recommendations for the specific OPR matter numbers to which OPR staff should associate files saved into NetDocuments. | a) Purchased from a vendor | Inonde, NetDocuments | Yes | The AI output from NetDocuments consists of predictive recommendations for the specific OPR matter numbers to which OPR staff should associate files saved into NetDocuments. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0280 | Named Entity Recognition | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Entity Extraction | Faster FBI operations | Resolved entities in a searchable index. | b) Developed in-house | Yes | Resolved entities in a searchable index. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / Department Wide | DOJ-0281 | Microsoft Office 365, Teams, and Windows default features | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Finding opportunities to optimally support DOJ personnel through existing Department-wide tools. | These capabilities help DOJ personnel achieve efficiencies through integrated AI assistance across O365 applications in a secure environment, including editorial/grammatical suggestions, data analysis, task automation, and enhanced search. | Improved user experience through qualitative and quantitative suggested improvements, analyses, visualizations. | a) Purchased from a vendor | No | Improved user experience through qualitative and quantitative suggested improvements, analyses, visualizations. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / JMD / OCDETF | DOJ-0282 | Link Analysis and Chart Creation from Narrative Summaries | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | Linkages are generally identified manually based on human review of unstructured narratives. This is time-consuming. | AI batch review of large groups of data sources to identify linkages, permitting staff to focus their review and analysis. | Recommendation Sample of narrative to identify and create links in correlated data | Recommendation Sample of narrative to identify and create links in correlated data | |||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0283 | Knowledge retrieval and synthesis (Azure OpenAI Services) | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Azure OpenAI will help ATR review large volumes of document-based data quicker and more efficiently. | Enhance the speed and accuracy of legal analysis and review. This technology will allow ATR to quickly distill key information and insights, streamline workflows, reduce manual effort, and expedite legal analysis. | Text generation Answers and insights to critical legal questions Legal summaries Legal citation and document references | a) Purchased from a vendor | Microsoft | Yes | Text generation Answers and insights to critical legal questions Legal summaries Legal citation and document references | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | |||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0284 | Internal Finance ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Create a personal assistant to provide application-level support to Workiva users based on their functional needs/tasks. | Reduce the need for system users to do manual research, reducing subsequent tier 1 help desk requests. | Formatted text response to specific user questions. | Formatted text response to specific user questions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0285 | Intelligent Workflow Optimization | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Inefficiencies in agency processes, operations, and workflows. | Recommendations and suggestions to improve various aspects of the workflows, operations, and processes supporting EOIR's mission functions. Improve and optimize EOIR's overall performance of its mission functions. | Recommendations for changing workflows, processes, and operations. | Recommendations for changing workflows, processes, and operations. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0286 | Immigration Hearing Transcription and Translation/Interpretation Services | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | Language translation/interpretation and hearing transcription services for EOIR immigration proceedings are completed manually, requiring significant time and monetary costs. Current manual processes are slow and labor-intensive and prolong various stages of immigration proceedings. In-person translators have limited availability to attend immigration proceedings. AI-assisted real-time language translation and AI-assisted transcription of hearings can automate parts or whole processes for EOIR language interpretation and hearing transcription services to optimize resources, time, and costs expended by the agency and the public for EOIR immigration proceedings. AI-assisted translation and transcription can automate processes for transcribing audio recordings of immigration hearings into searchable text and interpreting testimony given in a foreign language in real-time for court staff and parties to proceedings. | AI-assisted transcription may reduce or eliminate steps currently needed for the manual transcription process by creating preliminary drafts of hearing transcripts for manual review and verification. AI-assisted language translation can be completed in real-time and reduce the time to complete manual, simultaneous language interpretation during immigration hearings. EOIR personnel and parties to proceedings can conveniently read real-time translations on courtroom computers and monitors. The solution needs to allow contracted interpreters to appear via video remotely, which provides a cost-savings compared to in-person interpretation services, and may reduce instances of inadvertently double-booking interpreters or navigating the interpreter’s availability to travel to different hearing locations, all of which makes the immigration adjudication process more efficient. | Real-time text translation of languages into English during immigration hearings. Preliminary draft transcripts of immigration hearings for manual review to complete. | Real-time text translation of languages into English during immigration hearings. Preliminary draft transcripts of immigration hearings for manual review to complete. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0287 | Immigration Case Filing Intake and Processing | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Currently all immigration case filings are manually reviewed for the requisite physical quality (legibility, meets formatting requirements, etc.) before the filing is officially accepted or rejected by EOIR personnel, which prolongs the initial intake and processing of case filings. A large portion of initial intake and processing of case filings could be automated with AI tools only requiring manual review for outputs below a defined threshold. | More efficient review of case filings at intake. Ability to reallocate EOIR administrative personnel to assist with other tasks in the immigration adjudication process. | Automated review of case filings, automated acceptance or rejection of case filings, and recommendations to EOIR personnel to manually review case filings for quality as needed. | Automated review of case filings, automated acceptance or rejection of case filings, and recommendations to EOIR personnel to manually review case filings for quality as needed. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0288 | Immigration Case and Filing Content Summary | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Generative AI | EOIR legal support staff and adjudicators review voluminous filings in EOIR immigration proceedings, sometimes ranging into hundreds of pages for a single filing, and parties to proceedings do not organize content clearly or at all, which hinders review and processing by EOIR personnel to adjudicate the case. EOIR legal staff organize, review and categorize documentation submitted, and many of these administrative functions could be automated. In addition, EOIR's legal education and training team spends hours reading and summarizing immigration case law to prepare agency trainings and informational materials. | Technological assistance with research so adjudicators and legal support staff may focus their time and attention on utilizing their decision-making skill sets on legal analysis and drawing legal conclusions in an efficient manner. Decrease time and labor required for processing filings, reviewing filings, categorizing cases, and locating relevant content in filings, which improves the efficiency of the immigration proceedings. Decrease time and labor for reading and preparing immigration law trainings and informational materials. | Summaries of court filing contents with references to the source of information within the summary. Tabbing, labeling, and identifying submission types within voluminous court filings. Information pointers to EOIR adjudicators and legal support staff regarding where specific content most relevant to the adjudicator’s inquiry is located within the record. Summaries of immigration case law and other relevant legal authorities. | Summaries of court filing contents with references to the source of information within the summary. Tabbing, labeling, and identifying submission types within voluminous court filings. Information pointers to EOIR adjudicators and legal support staff regarding where specific content most relevant to the adjudicator’s inquiry is located within the record. Summaries of immigration case law and other relevant legal authorities. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0289 | Image processing | a) Pre-deployment – The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Manual time-intensive image processing | Process automation for faster FBI response | Output of the AI model will be a proposed list of digital image processing steps | Output of the AI model will be a proposed list of digital image processing steps | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0290 | Graph Analytics & Visualization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Identifies case links and visualizations for complex relationships | Assist with graphing relationships and other forms of data visualization. | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / OIG | DOJ-0291 | Grant Risk Assessment Model v3 | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Improved selection of grants to audit | Provide auditors with an additional resource in performing risk assessments that assists in the audit selection process. Allowing auditors to focus work on higher-risk grants can allow for the recovery or redirection of misused government funds and improve auditor effectiveness and efficiency. | Estimated questioned costs and findings for an audit | Estimated questioned costs and findings for an audit | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0292 | Goblin | d) Retired – The use case was reported in the agency’s prior year’s inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | ||||||||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0293 | Geospatial Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DOJ personnel need a way to process, map, visualize, and analyze geographic data to protect the American public, further investigations, and facilitate information exchange with federal, state, local, and foreign partners. | This use case enables DOJ to apply advanced AI/ML capabilities to mission-enabling geographic data in order to enhance data mapping, visualization, and integration. | The system can produce a variety of outputs in standard industry formats (e.g., spreadsheet files, maps, analytic files, database tables, and dynamic applications). | a) Purchased from a vendor | ESRI, ArcGIS | Yes | The system can produce a variety of outputs in standard industry formats (e.g., spreadsheet files, maps, analytic files, database tables, and dynamic applications). | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | |||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0294 | Generating Recommendations, Outlines, and Summaries | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | DEA needs a way to analyze unstructured and structured data to support providing recommendations, outlines, data classification, summaries, and other business operation support. | To accelerate insights and tool development to advance DEA's business operations and better serve the public. | Outputs may include visualizations and tables that support efficiencies in administrative functions. | Outputs may include visualizations and tables that support efficiencies in administrative functions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0295 | FOIA Production Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Enhances and streamlines the processing of Freedom of Information Act (FOIA) requests. Automates tasks such as document classification, intelligent identification and redaction of sensitive or confidential information, and deduplication of documents. | The AI capabilities enable more accurate and efficient organization and retrieval of FOIA-related documents. Additionally, AI optimizes workflows by automating repetitive tasks and minimizing human error, leading to faster processing times and increased operational efficiency. Overall, the incorporation of AI into FOIAXpress is designed to improve the efficiency, accuracy, and compliance of FOIA request processing, enabling staff to respond more promptly, reduce operational costs, and maintain higher standards of transparency and accountability. | Classifications, recommendations, and predictions. | a) Purchased from a vendor | FOIAXpress, Forum One, Adobe, and Polydelta | No | Classifications, recommendations, and predictions. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0296 | Fingerprint (Friction Ridge) Optical Character Recognition (OCR) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | DEA, when conducting fingerprint analysis to identify individuals who may be connected to evidence, needs to be able to compare friction ridge prints to other prints within the boundaries of a case. Product enables linking of cases where individuals are not necessarily identified. | This use case saves time and provides information for human decision-making. | Outputs images and portions of print cards. | c) Developed with both contracting and in-house resources | No | Outputs images and portions of print cards. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | |||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0297 | Extracting Data from Receipts to Speed Travel Reimbursement or Provide Logbook Documentation | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Gas receipts need to be reconciled / logged into FIRM – the AI would capture the data and can be used to record the receipt (reducing loss of paper) and eventually upload into FIRM reducing human data entry and time. IN also inspects receipts as part of their inspection. The laboratories also audit their OGV logbooks more than once annually. This would be a great pilot to expand into scanning and recording information from other purchases into UFMS and automate that process as well. | Cost savings, live data entry of OGV use (miles and fuel consumption), streamlining of voucher packet creation | The output is a CSV file. | a) Purchased from a vendor | Microsoft | No | The output is a CSV file. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0298 | EOIR Adjudicator Notice/Order Writing Assistance | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Generative AI | To reduce time for EOIR adjudicators to draft notices and orders for immigration cases. After reviewing the facts in the case and conducting a legal analysis of the issues presented, EOIR adjudicators and legal support staff determine their legal conclusions. Technology could be utilized to assist in preparation of a draft document for review by EOIR personnel based on the adjudicator's legal conclusions. | Improvement in the quality of writing (grammar, spelling, clarity, conciseness, etc.), as well as improvements to quality of decisions with more robust citations to the relevant facts in the case and the legal authority used to support the legal conclusions. Reduced time for drafting lengthy orders or notices. Reduced time to complete immigration cases. | Suggested templates for notices. Recommendations for case specific draft orders that include citations to the record and relevant legal authority, with embedded links to allow for efficient review and refinement. | Suggested templates for notices. Recommendations for case specific draft orders that include citations to the record and relevant legal authority, with embedded links to allow for efficient review and refinement. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0299 | Entity Resolution | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Entity Extraction and Resolution | Faster FBI operations | Searchable index of all records associated with distinct individuals. | b) Developed in-house | Yes | Searchable index of all records associated with distinct individuals. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0300 | Entity Extraction and Summarization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Data triage | Faster FBI operations | Text | Text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0301 | Enabling eDiscovery Platform AI | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Migrate CORA data to Relativity One or Everlaw, cloud-based solutions to enable Relativity's and Everlaw's AI tools for review prioritization, privilege review and other advanced eDiscovery utilities. Utilize internal or third-party tools for AI-assisted collection and data processing, including image and voice recognition and analysis of complex data formats. Integrate eDiscovery tools into case management processes. This initiative transforms litigation support capabilities by leveraging AI to accelerate document review processes, reduce discovery costs, and improve case preparation efficiency and effectiveness. | (1) Increases data availability and accessibility for integration with AI platforms through hosting on a scalable platform, in alignment with DOJ Data Strategy Goal #1 "Enterprise Data Management." Creates replicable enterprise capabilities that other DOJ components can adopt. (2) Allows migration to our existing environments more quickly. Discovery process will be streamlined with increased prioritization of review and lessens the time for manual privilege reviews. (3) AI-enabled workflows available to Division litigating teams for privilege review, case strategy and prioritized document review. | Prioritizations, Classifications, Recommendations: document priority rankings, privilege classifications, review recommendations. | Prioritizations, Classifications, Recommendations: document priority rankings, privilege classifications, review recommendations. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0302 | Email Organization Plugin | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | More efficient filing of email | Reduced time spent on administrative record retention requirements | No direct output; sorts and files documents in Outlook and electronic document repositories | No direct output; sorts and files documents in Outlook and electronic document repositories | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0303 | Document Processing | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Use natural language processing to convert spoken audio to text for employees with vision or mobility limitations. | Machine extraction and arrangement of data facilitates review and can allow human reviewers to locate relationships and patterns that would not otherwise be obvious. | Structured data files, including spreadsheets | Structured data files, including spreadsheets | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0304 | Diagram Creation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Make effective diagrams and illustrations quickly from text prompts for use in briefs and as demonstratives at hearings and trial | More effective advocacy and reduced time in generating effective illustrations | GenAI images and diagrams based on user's input of data | GenAI images and diagrams based on user's input of data | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0305 | Data Triage and Processing | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Computer Vision | Transcription, translation, summarization, and object detection in audio and video | Faster FBI operations | Text | a) Purchased from a vendor | AI Service Provider | No | Text | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0306 | Data Triage | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Manual search for data through many reports | Faster FBI operations | Text | Text | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0307 | Data outlier detection | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Validation of data given to FBI by checking for outliers | Better data quality through targeted human review | Potential data outliers for human review | Potential data outliers for human review | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0308 | Data Call Code Assist Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Automating administrative tasks | Increased efficiency and cost savings, and improved team productivity. | Search query terms | c) Developed with both contracting and in-house resources | Yes | Search query terms | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0309 | Customer Service AI Agent/ChatBot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Members of the public visiting EOIR's website have trouble locating information on the website. | Improving access to information on the EOIR website. | Suggests the EOIR webpage where the customer can locate the relevant content or information. | Suggests the EOIR webpage where the customer can locate the relevant content or information. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0310 | Conduit AI | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Generative AI | Platform is meant to provide a generative AI that can be used for document review and other use cases as a comparison point to other commercially available alternatives. | Increased efficiency, reduced costs, and improved customer experience through its conversational AI platform. | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | a) Purchased from a vendor | Conductor | No | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0311 | CoHost AI (Podcast hosting service feature) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Manually completing, hosting, and publishing an entire production is time-intensive. | Completes an entire production, hosted and published, in a fraction of the time it would take manually. | Podcasts are for public-relations or educational purposes, and not used for LE purposes | a) Purchased from a vendor | Buzzsprout | No | Podcasts are for public-relations or educational purposes, and not used for LE purposes | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0312 | Code Development | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | More efficient writing and maintenance of code. | Improve developer productivity and streamline development lifecycle. | Coding guidance and suggestions, with the tool providing real-time code completions based on comments and existing code. | a) Purchased from a vendor | GitHub, Microsoft | No | Coding guidance and suggestions, with the tool providing real-time code completions based on comments and existing code. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0313 | Cocounsel AI | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Generative AI | Platform is meant to provide a generative AI that can be used for document review and other use cases as a comparison point to other commercially available alternatives. | Increases USAO efficiency through rapid analysis of legal documents, improves accuracy by reducing manual review errors, and assists offices that are not fully staffed by doing more routine tasks and allowing legal professionals to focus on strategic and high-value work | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | a) Purchased from a vendor | Thomson Reuters | No | Automated transcriptions, metadata tagging suggestions, and object and facial recognition in media files | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | ||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0314 | Claims Program Predictive Fraud Analytics | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Deploy advanced AI analytics to detect fraudulent claims and suspicious patterns in compensation claims programs, including the September 11th Victim Compensation Fund, Radiation Exposure Compensation Act, Camp Lejeune Justice Act, and other federal victim assistance programs. This initiative protects program integrity, ensures resources reach legitimate claimants, and maintains public trust in federal compensation systems. Aligns with Administration priorities on combating fraud, protecting taxpayer funds, and ensuring justice for claimants. | (1) Strengthens the Civil Division's capabilities in administering victim compensation programs by identifying potentially fraudulent medical claims, duplicate submissions, and identity fraud. (2) Builds upon existing Civil Division case management systems and medical claim review processes. Leverages ongoing fraud detection initiatives across DOJ components, integrates with established medical record verification systems, and utilizes existing partnerships with healthcare providers and medical review contractors. (3) Improvement in fraudulent claim detection rates, prevention of fraudulent payouts annually, reduction in false positive flags affecting legitimate claimants, faster claim processing times for verified submissions, and improved coordination metrics with investigative agencies on fraud referrals. | Predictions, Classifications, Scores: fraud probability scores, claim authenticity classifications, risk alerts. | Predictions, Classifications, Scores: fraud probability scores, claim authenticity classifications, risk alerts. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CRT | DOJ-0315 | Civil Rights Public Reporting Portal | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | With limited staff, we use AI to summarize incoming public complaints to assist in determining if a complaint is actionable by our teams. | Lower backlog and faster response to the public. | Generates report summaries and tags on reports for CRT staff analysis. | Generates report summaries and tags on reports for CRT staff analysis. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0316 | Chatbot to Answer Internal Employee Policy Queries | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Natural Language Processing (NLP) | It is often challenging for DEA employees to manually search through our voluminous collection of manuals, books, CBP chemical codes, CFR, U.S.C., etc. to find an answer to their specific questions about policy, law, and rules. Training the AI on these materials enables it to answer employee queries comprehensively and quickly, thereby saving employees a lot of time. | AI can provide comprehensive answers to employees' questions much more quickly than if the employees had to search and find the answers themselves. In addition, the AI can identify content that needs to be revised or added to effectively provide answers. | The solution will enable users to ask questions through a chatbot interface, where the AI system, trained on the Agents Manual, will generate comprehensive answers and recommendations. These responses will be sourced from all relevant materials and include hyperlinks to the original references for easy access and verification. | c) Developed with both contracting and in-house resources | OpenAI | No | The solution will enable users to ask questions through a chatbot interface, where the AI system, trained on the Agents Manual, will generate comprehensive answers and recommendations. These responses will be sourced from all relevant materials and include hyperlinks to the original references for easy access and verification. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0317 | Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | General AI assistance | Faster FBI operations | Multimodal output | Multimodal output | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0318 | Case Management System Integration | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Leverage AI capabilities to support case management, such as creating codes for events in cases in order to track their progress and estimating time to completion to help managers evaluate the resource needs of the case. | Reduced administrative overhead and leaner management structure | GenAI text; other precise types of outputs not yet known | GenAI text; other precise types of outputs not yet known | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0319 | Camtasia | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This allows USMS training content creators to generate on-screen presenters and realistic narration from text for global accessibility. | Enhances the learner's online learning experience by improving accuracy of services, supporting 508 compliance, and reducing the time to produce and release training materials to USMS employees. | Video generation and generated text from speech. | a) Purchased from a vendor | TechSmith | No | Video generation and generated text from speech. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0320 | Business Intelligence Tools | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Assists with data discovery and visualization. | The embedded data analytic capabilities will increase the efficiency and effectiveness of locating and analyzing data. | Tables, graphs, link-node diagrams, and other visualizations of extracted data. | a) Purchased from a vendor | Tableau, Power BI | No | Tables, graphs, link-node diagrams, and other visualizations of extracted data. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0321 | Business Form Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Manual time-intensive process | Faster FBI operations | Document | Document | ||||||||||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0322 | Bloomberg (AI Assisted Legal Research) | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Provides AI-assisted research to help streamline legal research and document review. | Increases the speed at which attorneys can review and evaluate case law to determine if it is applicable to their current investigations. | Summarization of caselaw | Summarization of caselaw | ||||||||||||||||||||
| Department Of Justice | Department of Justice / TAX | DOJ-0323 | Big Data Visualization Tools | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Improving analytics and support for document review | Open-source tools that we believe could assist us with graphing relationships and other forms of data visualization. | Machine learning text, images, and diagrams | Machine learning text, images, and diagrams | ||||||||||||||||||||
| Department Of Justice | Department of Justice / EOIR | DOJ-0324 | Background Searches | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | To perform background searches to inform an EOIR character and fitness determination for individuals applying to be an EOIR accredited representative. | Timely, comprehensive, and efficient background check. Character and fitness determinations made based on accurate background information. EOIR approves accreditation applicants with the requisite character and fitness. Individuals in EOIR immigration proceedings are assisted by accredited representatives with the requisite character and fitness. | Background check findings. | a) Purchased from a vendor | TransUnion Risk and Alternative Data Solutions, Inc. | Yes | Background check findings. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | No | b) In-progress | Consistent with Executive Orders and OMB guidance, the case owner relied on DOJ AI governance practices to evaluate impacts and risks. | d) In-progress | b) Development of monitoring protocols is in-progress | b) Establishment of sufficient and periodic training is in-progress | c) In-progress | c) Establishment of an appropriate appeal process is in-progress | e) In-progress | ||||||
| Department Of Justice | Department of Justice / EOUSA | DOJ-0325 | Azure AI Foundry Platform | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This AI solution is meant to provide chatbot responses in a secure enclave. In addition, this platform is meant to provide a generative AI that can be used for document review and other use cases as a comparison point to other commercially available alternatives. | Decreased cost to operate AI compared to other commercially available solutions. | Text outputs | c) Developed with both contracting and in-house resources | Microsoft | No | Text outputs | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | Yes | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0326 | AWS Textract | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | AWS Textract would be used to read the content of certain PDFs sent to USMS by partner agencies. It will prepopulate screens for the user to review against a mailed-in PDF. | Reduce processing time by not having to rekey information from text-based documents. | OCR/scanned data from PDFs | OCR/scanned data from PDFs | ||||||||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0327 | AWS Rekognition | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Computer Vision | The AWS Rekognition suite would be used to prevent the creation of duplicate records in the USMS Capture System. The application would index faces already in the Capture system, then search new entries against the existing database to flag possible existing records for a new subject. | The expected benefits to the agency would be higher data quality, lowered risk of duplicate FID creation, and faster intakes. | Possible matches for intake data on existing records that appear as options for decision-making. | Possible matches for intake data on existing records that appear as options for decision-making. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0328 | Audio Clarity Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Audio clarity | Higher quality data | Audio | Audio | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0329 | Audio and Written Transcription and Translation | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | a) High-impact | High-impact | Classical/Predictive Machine Learning | This technology automates the transcription and translation of Spanish and Mandarin Chinese audio files from lawfully seized devices and authorized communications. | The immediate benefits are speed and lower cost, enabling investigators to quickly identify which parts of the conversations should be reviewed and interpreted by human translators. | Outputs transcription of the original language and the English translation, with speaker differentiation, including search results for the predetermined relevant terms defined by the analyst. | c) Developed with both contracting and in-house resources | MIT Lincoln Laboratory | No | Outputs transcription of the original language and the English translation, with speaker differentiation, including search results for the predetermined relevant terms defined by the analyst. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | This AI use case utilizes one or more of the demographic variables listed in compliance with all federal laws and agency regulations. | Yes | ||||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0330 | Audiate | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This allows USMS training content creators to utilize text-to-speech generation with audio from a wide range of voices and tones without the need for additional actors. | Enhances the learner's online learning experience by improving accuracy of services, supporting 508 compliance, and reducing the time to produce and release training materials to USMS employees. | Audio files and generated speech from text. | a) Purchased from a vendor | TechSmith | No | Audio files and generated speech from text. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0331 | ATR Generative Artificial Intelligence Test | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Facilitate access to AI tools by ATR personnel. | This initiative will enable ATR to embrace AI technology responsibly consistent with Presidential Action and various departmental guidance, allowing ATR personnel to optimize and modernize their work processes. | Open-source research, summarizing publicly available documents. | a) Purchased from a vendor | https://www.harvey.ai/, https://openai.com/, https://www.perplexity.ai/ | No | Open-source research, summarizing publicly available documents. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | No | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / ATR | DOJ-0332 | ATR Expert/Consulting with Bates White | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Law Enforcement | Deployed | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The AI tool will ingest all the data sets and work to standardize them so that the Bates White expert and team can use the data to run economic models. | Assist in preliminary data processing, including document summarization, name standardization, and text extraction. This includes “cleaning” the data for additional processing, performing the equivalent of a “find and replace” function to standardize names. The Bates White system may use AI to summarize the general content of documents. | The output data may be provided to experts and support staff working for the State Attorneys General who are cooperating with ATR on investigations and litigations. The output data will be returned to ATR for use by internal ATR economists. | a) Purchased from a vendor | Bates White, using Microsoft Azure | No | The output data may be provided to experts and support staff working for the State Attorneys General who are cooperating with ATR on investigations and litigations. The output data will be returned to ATR for use by internal ATR economists. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0333 | AI-powered Legal Research | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | DOJ attorneys spend significant time manually researching case law and regulatory precedents across jurisdictions, often missing obscure authorities or evolving standards. This initiative deploys an AI-powered legal research platform that synthesizes case law, flags conflicts or shifts in standards, and provides confidence scores for relevance, automatically updating as new decisions are published. Aligns with E.O. 14179’s innovation directive and M-25-21’s efficiency requirements. | (1) Enhances DOJ's litigation effectiveness by ensuring comprehensive legal research, reducing research time per case, and improving argument quality through better precedent identification. Strengthens government's ability to defend federal programs and policies with more thorough legal foundations. (2) Builds on existing Westlaw/Lexis subscriptions, DOJ brief bank, and PACER databases. Integrates with current legal research workflows and citation management systems. (3) Reduction in research hours per brief; increase in relevant precedents cited; improved appellate success rates; measurable improvement in legal argument comprehensiveness. | Recommendations, Classifications, Scores: research recommendations, precedent relevance scores, conflict alerts. | Recommendations, Classifications, Scores: research recommendations, precedent relevance scores, conflict alerts. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0334 | AI-generated Content Detector | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Federal litigators face increasing challenges with AI-manipulated evidence and documents from opposing parties. An AI tool could be used to flag potentially AI-generated content. | (1) Enhanced litigation integrity by identifying potentially manipulated evidence, protecting court proceedings from AI-generated misinformation and ensuring compliance with local court rules. (2) Builds on existing document review platforms and federal privilege protection protocols. (3) Improved evidence verification accuracy; reduced risk of submitting hallucinated data; enhanced compliance with AI disclosure requirements; increased attorney confidence in document authenticity. | Classifications, Predictions: AI-generation probability scores, document authenticity flags, content manipulation alerts, disclosure requirement notifications. | Classifications, Predictions: AI-generation probability scores, document authenticity flags, content manipulation alerts, disclosure requirement notifications. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0335 | AI-Enabled Workflow Automation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Office of Immigration Litigation (OIL) faces significant backlogs, with repetitive or frivolous claims slowing progress and foreign-language evidence creating bottlenecks. This initiative uses Natural Language Processing to triage claims and automate translation, accelerating case processing. It aligns with M-25-21’s innovation and governance principles by improving efficiency while preserving human oversight. | (1) Reduces immigration case backlogs, ensures consistent treatment of claims, and improves responsiveness to federal courts. (2) Builds on OIL case management systems, United States Citizenship and Immigration Services data feeds, and DOJ translation contract costs. (3) Reduced backlog size; fewer attorney hours per case; translation accuracy benchmarks met; increased identification of fraudulent/frivolous and repetitive claims. | Classifications, Automated Translations, Recommendations: case priority classifications, language translations, processing recommendations. | Classifications, Automated Translations, Recommendations: case priority classifications, language translations, processing recommendations. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0336 | AI-enabled Legal Argument Harmonization | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Natural Language Processing (NLP) | Conflicting arguments across DOJ branches can weaken credibility in appellate courts. This initiative deploys AI to mine arguments across briefs, harmonize DOJ positions, and validate citations. It advances M-25-21’s governance requirement for consistent positions and E.O. 14179’s push for efficiency. | (1) Enhances DOJ credibility before the court; avoids conflicting arguments. (2) Uses DOJ appellate brief bank and legal research/citation tools. (3) Reduction in conflicting arguments; % of briefs citation-validated; improved appellate outcomes. | Recommendations, Validations: argument alignment suggestions, citation verification, position harmonization recommendations. | Recommendations, Validations: argument alignment suggestions, citation verification, position harmonization recommendations. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0337 | AI-enabled Compliance Verification | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | Classical/Predictive Machine Learning | Federal agencies and recipients of federal funding must certify compliance with various civil rights statutes, but DOJ lacks efficient methods to verify the accuracy of these certifications. Manual review of compliance documentation is resource-intensive and often occurs only after complaints are filed, allowing violations to persist and potentially expand. False certifications can result in continued federal funding to non-compliant entities, undermining civil rights enforcement and wasting taxpayer resources. Aligns with E.O. 14179's innovation requirements and M-25-21's public trust and governance pillars. | (1) Strengthens DOJ's ability to enforce civil rights laws through the False Claims Act by identifying clear-cut compliance violations earlier in the process. Protects taxpayer funds from flowing to entities that falsely certify compliance while ensuring federal programs achieve their intended civil rights objectives. (2) Leverages FCA compliance databases, Civil Rights Division patterns, and initial whistleblower submission channels. (3) Focus on objective, verifiable metrics such as statistical disparities in outcomes, missing required documentation, or contradictions between certifications and published policies. Number of suspicious certifications flagged; investigations initiated; successful FCA settlements or recoveries. | Classifications, Predictions, Recommendations: compliance risk assessments, violation predictions, investigation recommendations. | Classifications, Predictions, Recommendations: compliance risk assessments, violation predictions, investigation recommendations. | |||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0338 | AI-Enabled Briefing Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | | Generative AI | Federal Programs Branch (FPB) attorneys face immense workloads defending statutes and federal programs, often requiring rapid analysis of massive records and consistent legal arguments across circuits. This initiative will deploy a retrieval-augmented generation (RAG) tool trained on DOJ filings and administrative records to accelerate drafting and ensure consistency. It directly aligns with E.O. 14179 (removing barriers to AI adoption) and OMB M-25-21 (innovation, governance, and public trust). | (1) This initiative strengthens the federal government’s ability to protect statutory authority and defend policy actions across all agencies. (2) Reduction in attorney hours per brief; mitigation of hallucinated citations; measurable improvement in argument consistency across cases. (3) Builds on DOJ’s existing “brief bank,” eDiscovery platforms, and PACER data archives. | Content Generation, Recommendations: draft legal briefs, argument suggestions, citation recommendations, consistency checks. | Content Generation, Recommendations: draft legal briefs, argument suggestions, citation recommendations, consistency checks. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0339 | AI-driven Fraud Detection | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | | Classical/Predictive Machine Learning | Fraud diverts billions in taxpayer funds, and fraud cases involve sifting through vast amounts of structured and unstructured data, often too large for manual review to detect early fraud signals. This initiative will apply AI-enabled anomaly detection to efficiently synthesize insights from vast data collections and uncover fraudulent patterns. It aligns with M-25-21’s public trust priority by safeguarding taxpayer funds and E.O. 14179’s innovation directive. | (1) Bolsters DOJ’s mission to prevent waste, fraud, and abuse in taxpayer-funded health programs, strengthening enforcement under the FCA. (2) Builds on existing Medicare/Medicaid data feeds, OIG case frameworks, previous FCA healthcare enforcement analytics, and ongoing interagency fraud task force initiatives. (3) Increase in early identification of false claims; recovery dollars secured; reduced investigation timelines. | Predictions, Classifications, Alerts: fraud risk scores, anomaly alerts, pattern classifications. | Predictions, Classifications, Alerts: fraud risk scores, anomaly alerts, pattern classifications. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0340 | AI-assisted Settlement Data and Risk Analysis | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Settlements require knowledge of both valuation and risk for frequent types of litigation. An AI tool could pull structured data from settlement databases together with unstructured settlement memoranda and analyze settlement risk, valuation, and qualitative factors. Users can query one or both sources of data in natural language. | (1) Improved settlement decision-making through comprehensive risk and valuation analysis, leading to more favorable outcomes for the government and taxpayers. (2) Builds on existing Salesforce migration initiatives and settlement databases while adding natural language query capabilities. (3) Reduced attorney time per settlement analysis; improved consistency in settlement valuations; better risk assessment accuracy; enhanced ability to identify settlement patterns and trends. | Analytics, Predictions, Recommendations: settlement risk assessments, valuation analyses, pattern identification, natural language query responses from structured and unstructured data. | Analytics, Predictions, Recommendations: settlement risk assessments, valuation analyses, pattern identification, natural language query responses from structured and unstructured data. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0341 | AI-assisted Legacy Code Modernization | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | This initiative uses AI-assisted code translation and refactoring tools to automatically convert legacy code into modern, secure languages (e.g., Java, Python) while flagging logic gaps and optimizing for cloud environments. Aligns with E.O. 14179’s innovation directive and M-25-21’s governance requirements. | (1) Modernized applications improve resilience, reduce security risk, and lower long-term IT O&M costs, directly supporting DOJ’s modernization and cybersecurity priorities. (2) Builds on DOJ CIO modernization roadmaps, Federal IT dashboards, and prior migration initiatives to cloud platforms. (3) Reduction in legacy system maintenance costs, outages and performance loss; # of applications successfully migrated; cybersecurity vulnerabilities reduced. | Code Generation, Recommendations, Predictions: modernized code output, optimization recommendations, vulnerability predictions. | Code Generation, Recommendations, Predictions: modernized code output, optimization recommendations, vulnerability predictions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / OJP | DOJ-0342 | AI Sandbox for exploration and education on AI | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Help identify enhancements in mission capabilities; explore risk management; help identify AI use cases within OJP; educate an AI-enabled workforce. | Enable the OJP workforce to work faster; reduce the number of software tools by identifying functions AI can perform; improve access to data. | Risk assessments; recommendations; decisions. | Risk assessments; recommendations; decisions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0343 | AI Powered Data Governance | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | This initiative deploys AI-driven data governance and metadata management to auto-tag, catalog, and enforce retention, while identifying duplicate/low-value files. Aligns with M-25-21 governance & E.O. 14179 innovation. | (1) Reduces costs, improves compliance with records management/FOIA, and data governance. Boosts transparency by making DOJ data discoverable and reusable. (2) DOJ records systems, NARA retention schedules, existing FOIA/eDiscovery platforms. (3) Measurable data storage savings; % of files tagged with metadata; improved FOIA response times. | Classifications, Recommendations, Automated Actions: content categorization, retention recommendations, duplicate identification. | Classifications, Recommendations, Automated Actions: content categorization, retention recommendations, duplicate identification. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0344 | AI Personal Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Complex litigation requires intensive time and project management. AI assistants can analyze calendars, emails, case-tracking sheets, and schedules; flag deadlines; note high-priority tasks; and suggest productivity techniques for managing complex projects. Attorneys could create additional notifications or project management integrations to improve efficiency. | (1) Improved attorney productivity and case management efficiency, enabling better service delivery to client agencies and more effective litigation outcomes. (2) Builds on existing calendar systems, email platforms, and case-tracking databases while maintaining attorney-client privilege protections. (3) Reduced missed deadlines; improved task prioritization; enhanced productivity metrics; better work-life balance for attorneys; increased case management efficiency. | Recommendations, Alerts: deadline notifications, task prioritization suggestions, productivity optimization recommendations, calendar conflict alerts, project milestone tracking. | Recommendations, Alerts: deadline notifications, task prioritization suggestions, productivity optimization recommendations, calendar conflict alerts, project milestone tracking. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0345 | AI Meeting Assistant | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | This initiative deploys AI-enabled meeting assistants that provide real-time transcription, generate concise summaries, identify action items, and tag outcomes to case files. Aligns with E.O. 14179’s innovation goals and M-25-21’s efficiency and transparency pillars. | (1) Increases productivity by ensuring institutional knowledge is captured, searchable, and integrated into case management systems, reducing duplication and oversight risks. (2) Integrates with Microsoft Teams, Outlook, OneNote, and other knowledge management systems. (3) % of meetings transcribed and summarized; attorney time saved; # of action items captured and completed. | Transcriptions, Summaries, Extractions: meeting transcripts, summary reports, action item lists, outcome tags. | Transcriptions, Summaries, Extractions: meeting transcripts, summary reports, action item lists, outcome tags. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0346 | Mass Claim Tool | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Components with mass claims, such as RECA, must respond to tens of thousands of similar claims within statutorily defined timelines, or the U.S. forfeits its defenses. Tools to quickly process mass claims, including template letter generation, claim categorization, and automated data entry of standardized filings, could meet urgent needs. | (1) Ensures statutory compliance for mass claims processing, protecting the government's legal defenses while providing timely relief to eligible claimants. (2) Builds on existing RECA program infrastructure and mass claims databases while incorporating specialized medical data handling and privilege protections. (3) Achievement of statutory processing deadlines; reduced attorney hours per claim; improved consistency in claim categorization; automated template generation; enhanced quality control processes; increased claimant satisfaction through faster processing. | Content Generation, Classifications, Automation: template letters, claim category assignments, automated data entry, standardized filing generation, quality control flags. | Content Generation, Classifications, Automation: template letters, claim category assignments, automated data entry, standardized filing generation, quality control flags. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / CIV | DOJ-0347 | AI Evidence and Claim Consolidation | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | | Natural Language Processing (NLP) | This initiative applies AI to synthesize records, summarize expert reports and depositions, and identify duplicate claims. It also uses AI to identify inconsistencies between records and plaintiff claims, identify red flag legal issues, and create templates to respond to frequent or high-volume litigation. It also allows for analysis of settlement and damages databases to identify outlier trends. It supports M-25-21’s public trust pillar and E.O. 14179’s innovation agenda. | (1) Improves the government’s litigation posture in high-value torts, reduces exposure to excessive payouts, and ensures equitable and efficient claims processing. (2) Builds on DOJ medical record review systems and HHS/VA data integration, along with Relativity/CORA settlement, damages, and entitlement databases. (3) Faster evidence review; detection and elimination of duplicate claims; more effective and consistent settlements; increased dismissal or settlement of weak claims; attorney time saved. | Summaries, Classifications, Predictions: document summaries, duplicate detection, inconsistency flagging, settlement predictions. | Summaries, Classifications, Predictions: document summaries, duplicate detection, inconsistency flagging, settlement predictions. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / Department wide | DOJ-0348 | AI Cloud Environments | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | FedRAMP authorized environments are used to deploy tools that enable data-driven decision-making. | To support expedited data collaboration and analytics. | Outputs vary by use case. | Outputs vary by use case. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / JMD | DOJ-0349 | AI CLIN Generation | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Determines the number of contract lines needed to meet contract requirements, outlines each line, and populates certain data for review within the Unified Financial Management System. | Reduces the time and effort of manual work to create contracts within the Unified Financial Management System. | Suggested contract lines, descriptions, and certain system fields for review/approval by users. | Suggested contract lines, descriptions, and certain system fields for review/approval by users. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBI | DOJ-0350 | AI Chatbot | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Time-intensive review of policy content | Faster FBI operations | Text with citations | Text with citations | ||||||||||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0351 | Adobe Premiere | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Seamless editing of videos and speech transcripts. | The videos are for public-relations or educational purposes and are not used for LE purposes. | Provides a quality video for viewing for educational purposes. | a) Purchased from a vendor | Adobe | No | Provides a quality video for viewing for educational purposes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / FBOP | DOJ-0352 | Adobe Photoshop | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Generative and image enhancement features, selection and workflow improvements. | Better graphics and adjustments of photos for professional quality results. | Professional quality photos and graphics. Photos are for public-relations or educational purposes and are not used for LE purposes. | a) Purchased from a vendor | Adobe | No | Professional quality photos and graphics. Photos are for public-relations or educational purposes and are not used for LE purposes. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / USMS | DOJ-0353 | Acquisition Support Tool | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | The problem VAO Ally is intended to solve is the time and resource burden of navigating complex procurement regulations in an office that is already operating lean, even at near full staffing. Ally quickly summarizes public-domain contracting regulations, simplifies technical language, and provides easy access to relevant rules and resources. This reduces the time staff spend searching for answers, minimizes the risk of misinterpretation, and allows contracting professionals to focus on the judgment-based, legally binding decisions that only they can make. | The expected benefits of VAO Ally are faster access to accurate procurement guidance, reduced administrative burden on contracting professionals, and greater consistency in interpreting acquisition regulations. For the agency, this means improved efficiency in procurement processes, fewer delays caused by staff shortages or vacancies, and better use of limited resources to focus on mission-critical decision making. For the public, the outcome is a procurement workforce that can respond more quickly and effectively to agency needs, ultimately supporting timely delivery of government services and safeguarding taxpayer dollars. | Plain-language summaries, references to regulations, and simplified guidance that users can apply as part of their own professional judgment. The tool may suggest possible resources or interpretations, but the final decision-making authority rests entirely with a warranted Contracting Officer. | a) Purchased from a vendor | Virtual Acquisition Office (VAO) Ally | No | Plain-language summaries, references to regulations, and simplified guidance that users can apply as part of their own professional judgment. The tool may suggest possible resources or interpretations, but the final decision-making authority rests entirely with a warranted Contracting Officer. | The case owner relied on DOJ AI governance practices to select and prepare data, as well as evaluate performance. | Yes | None of the above | No | |||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0354 | Simulate Regulatory Audits to Train Diversion Investigators and Improve Audit Protocols | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | Enables simulations of DEA Diversion audits that can be used to train new Diversion Investigators while evaluating DEA audit protocols for inconsistencies and vulnerabilities to generate best-practice models. | Ultimately, this will improve the efficiency and quality of regulatory audits, which will increase the number of civil fines that we issue. | Unknown | Unknown | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0355 | Parse and Capture Data Submitted by Laboratories to the NFLIS Program | a) Pre-deployment – The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Agentic AI | Collecting and entering information from registrants in the National Forensic Laboratory Information System (NFLIS) program takes considerable time and effort and is prone to human error. | Cost savings, reduced customer wait times, and improved accuracy of reporting. | Extraction of information into a matrix of rows and columns which links identified substances to individual drug exhibits. | Extraction of information into a matrix of rows and columns which links identified substances to individual drug exhibits. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0356 | Managing Document Digital Signatures | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | It is currently time-consuming and challenging to collect digital signatures from DOJ personnel into one source document. AI would be used to manage the document workflow through a single upgraded storage location with advanced security features. | Saves time and effort by integrating identity verification while reducing the amount of storage required for documents. | DEA/DOJ digital cloud-based signatures can be integrated with an existing AI platform/application. AI outputs: AI-assisted review, automated tagging, custom extractions, agreement summaries, and a chatbot for user help. | DEA/DOJ digital cloud-based signatures can be integrated with an existing AI platform/application. AI outputs: AI-assisted review, automated tagging, custom extractions, agreement summaries, and a chatbot for user help. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0357 | Creating and Maintaining IT Security Packages for Authorizations | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Cybersecurity personnel want to automate the identification of security controls that can have implementation statements, create and maintain security packages for authorizations, keep up with compliance requirements, and more quickly onboard new systems and applications. | Reduces costs while enabling us to maintain compliance and consistency. | Generated implementation statements for security packages, reporting, trending, dashboarding | Generated implementation statements for security packages, reporting, trending, dashboarding | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0358 | Community Outreach Chatbot that Helps the Public Consume Prevention Resources | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Agentic AI | Many citizens may not have the time to explore the copious library of DEA Community Outreach and Prevention Support (CPO) resources. By training AI on these resources, citizens can ask plain language questions and get succinct answers. | Increased usage of drug use prevention resources by citizens. Extends the reach of CPO into communities who have a high risk of drug abuse. | 1. Delivery of DEA publications (digital, hard copies for individuals, and bulk copies for organizations / events); 2. Connect users with best content fit; 3. Allow users to request DEA participation in community events; 4. Increase user skills and understanding to prevent substance use through the generation of answers derived from DEA content; 5. Connect users with DEA partners that provide content outside of CPO scope. | 1. Delivery of DEA publications (digital, hard copies for individuals, and bulk copies for organizations / events); 2. Connect users with best content fit; 3. Allow users to request DEA participation in community events; 4. Increase user skills and understanding to prevent substance use through the generation of answers derived from DEA content; 5. Connect users with DEA partners that provide content outside of CPO scope. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0359 | Chatbot to Answer Diversion Registrants' Queries About Registration and Compliance Issues | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | TOMSS (Technical Operations Management Support System) will provide call-deflection to ease the customer interaction burden on call center representatives, registration program specialists, and Diversion Investigators. The AI will provide consistent, accurate responses to repetitive queries and replace manual workloads. | 30–40% reduction in repetitive calls and emails. • 15–25% reduction in agent workload. • 100% of chatbot responses sourced from verified DEA policy. • 24/7 access to authoritative self-service guidance for registrants. | • Policy-grounded responses to registrant inquiries (text output). • Deflection metrics and analytics (call volume reduction, FAQ trends). • Context-based prompts or redirects to DEA.gov resources. • Guardrail logic to return “no response” for non-registrant or ungrounded prompts. | • Policy-grounded responses to registrant inquiries (text output). • Deflection metrics and analytics (call volume reduction, FAQ trends). • Context-based prompts or redirects to DEA.gov resources. • Guardrail logic to return “no response” for non-registrant or ungrounded prompts. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0360 | Capture and Convert Structured Data from Scanned Case Documents to Support Advanced Analysis and Trend Forecasting | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | a) High-impact | High-impact | | Generative AI | The Targeting & Special Projects Unit (DOIT) has identified a critical need to capture, convert, and process both structured and unstructured text from scanned case documents to support advanced analysis and trend forecasting. This initiative will leverage an optical character recognition (OCR) solution capable of extracting both typed and handwritten content from diverse sources, including medical notes, invoices, and statements. Captured text will be transformed into a standardized, machine-readable format (e.g., CSV) and integrated into a relational database. From there, advanced analytical techniques will be applied to reveal hidden structures, patterns, and relationships within the data. By unlocking this information, we aim to enhance our ability to anticipate trends, strengthen investigative strategies, and move toward a more predictive, data-driven approach. | Save valuable investigative time that can be used to focus on the results of the analysis. Conducting more comprehensive analysis on the data will improve trend forecasting, strengthen investigative strategies, and support a more predictive, data-driven approach. | The anticipated output for this AI use case will include text for documentation and CSV files for Excel spreadsheet analysis. | The anticipated output for this AI use case will include text for documentation and CSV files for Excel spreadsheet analysis. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0361 | Auditing Diversion Registrant Inventories of Controlled Substances | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | c) Not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Diversion Investigators regularly conduct accountability audits of Registrants' management of controlled substances in order to detect diversion. Registrants sometimes submit hundreds of pages of invoices, receipts, and logs, which must be reconciled against inventory and movement records to ensure that the audits balance. Diversion Investigators review these and mark them for review if there is non-compliance with regulations, such as missing information or missed signatures. This current workflow relies on manual data entry into spreadsheets, leading to errors, inconsistencies, and excessive investigative time. | Reduces the amount of investigator time per audit by automating the extraction of key fields (e.g., item, quantity, cost, dates) from invoices and auto-populating computation charts. Standardized extraction and reconciliation eliminates human entry errors and decreases audit reconciliation error rates. Real-time flagging of discrepancies (e.g., inventory errors, excess sales, mismatched destruction records, possible fraudulent records) hastens the detection of potential diversion. Frees investigators to focus on high-value investigative activities and field operations, saving thousands in costs. Provides an auditable trail of extraction and reconciliation for internal and external audits. Maintains full audit-ready compliance for all registrants and a tangible audit trail for legal proceedings. | Outputs may be data reports to support Diversion Control Division's mission and better serve the registrant community. | Outputs may be data reports to support Diversion Control Division's mission and better serve the registrant community. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0362 | Analyzing Prescription Monitoring Program Data | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Classical/Predictive Machine Learning | DEA investigators currently rely on manual manipulation of data in spreadsheets to conduct Prescription Monitoring Program (PMP) analysis. This requires 8-10 hours of labor (data extraction, cleaning, cross-referencing, narrative synthesis), and, since investigators analyze data in different ways, results vary. There is little consistency across the agency or even between investigators in terms of what data is examined, and there is no easy way to identify patterns such as MMEs, combinations, and early fills. | Increases operational efficiency by automating data preparation, risk identification, and initial narrative generation. Average time spent to produce a PMP analysis decreased from ten to two hours (an 80% reduction). By employing consistent, data-driven risk analysis, increases the proportion of high-value investigations that result in successful enforcement or administrative outcomes. Reallocates Diversion Investigator effort to higher-value tasks (e.g., field operations, strategic planning), producing a labor cost benefit in the thousands per Diversion Investigator based on average pay and number of PMPs analyzed. Faster identification of prescribing anomalies supports earlier public health interventions and reduces community exposure to diverted pharmaceuticals. A single, centrally managed AI service can be provisioned to all DEA field offices, ensuring uniform analytic standards. | Quantitative data reports and analyses | Quantitative data reports and analyses | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0363 | Analyze Financial Reports to Identify Linkages Across Investigations | a) Pre-deployment – The use case is in a development or acquisition status. | Law Enforcement | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Natural Language Processing (NLP) | Comprehensive analytical review of reports related to money laundering, specifically identifying commonalities across multiple investigations. | Analysts no longer have to spend time manually collecting data from reports and importing it into PowerBI dashboards as a first step in linking investigations and showing a larger picture of criminal networks. | Quantitative data reports and analyses to inform agents and intel analysts. | Quantitative data reports and analyses to inform agents and intel analysts. | ||||||||||||||||||||
| Department Of Justice | Department of Justice / DEA | DOJ-0357 | Creating and Maintaining IT Security Packages for Authorizations | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | b) Presumed high-impact but determined not high-impact | Not high-impact | Does not produce an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on any of the individuals or entities identified in OMB-25-21. | Generative AI | Cybersecurity personnel want to automate the identification of security controls that can have implementation statements, create and maintain security packages for authorizations, keep up with compliance requirements, and more quickly onboard new systems and applications. | Reduces costs while enabling us to maintain compliance and consistency. | Generated implementation statements for security packages, reporting, trending, dashboarding | Generated implementation statements for security packages, reporting, trending, dashboarding | ||||||||||||||||||||
| Department Of Labor | DOL-02 | Language Translation | Pre-deployment | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-03 | Audio Transcription | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-04 | Text to Speech Conversion | Pre-deployment | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-11 | Electronic Records Management | Pre-deployment | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-12 | Call Recording Analysis | Pre-deployment | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-13 | Automatic Document Processing | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-15 | Generative AI Assistant (AI Center) | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-19 | Occupational Employment and Wage Statistics (OEWS) Occupation Autocoder | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-20 | Scanner Data Product Classification | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-21 | Expenditure Classification Autocoder | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-22 | PII Redaction | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-23 | Workforce Recruitment Program Website Chatbot Assistant | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-27 | Worker Paid Leave Usage Simulation (Worker PLUS) Microsimulation Program | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-28 | Computer-Assisted Coding: Survey of Occupational Injuries and Illnesses (SOII) Autocoder | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-29 | Census of Fatal Occupational Injuries (CFOI) Record Matching | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-32 | Note Taking Bot | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-35 | Current Population Survey Off-the Clock (CPS OTC) Prediction | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-36 | Sample Refinement: Frame API | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-37 | Consumer Expenditure (CE) Interview Item Code Estimation | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-38 | Consumer Expenditure (CE) Interview Imputations | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-39 | Quarterly Census of Employment and Wages (QCEW) North American Industry Classification System (NAICS) Autocoder | Pilot | Pilot | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-40 | Comment Actionability Likelihood Score | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-41 | Computer-Assisted Review: Occupational Requirements Survey (ORS) Autocoder | Pre-deployment | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-42 | Producer Price Index (PPI) Price Tolerance Prediction | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-43 | Employee Benefits Security Administration (EBSA) Case File Summarization | Pre-deployment | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-45 | Natural Language Processing (NLP) Tool for Bureau of International Labor Affairs (ILAB) | Pre-deployment | Pre-deployment | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-46 | DAISI (DOL AI Search Insights) | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of Labor | DOL-47 | Employment and Training Administration (ETA) Grants Monitoring Tool through Doc Explorer | Deployed | Deployed | ||||||||||||||||||||||||||||||
| Department Of State | A/PRI | AI Input in Translation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | Leveraging machine-based tools to streamline workflows in translation work. Machine output is carefully post-edited by professional human translators to ensure the highest-quality product. | Reduced time, reduced cost, and improved accuracy of translated documents. | Translated text in draft form. | a) Purchased from a vendor | RWS | Yes | Translated text in draft form. | Memory modules | No | k) None of the above | No | |||||||||||||||
| Department Of State | A/SKS | FOIA Web ML Document Indexer | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | A/SKS | AI-Augmented Declassification Review | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | a) High-impact | High-impact | Classical/Predictive Machine Learning | The number of documents, particularly cables and emails, requiring declassification review will increase exponentially in the next few years. Manual review is unsustainable and expensive given the number of cables (in the hundreds of thousands) and emails (increasing from hundreds of thousands to millions). | The expected benefits and positive outcomes of using AI are cost savings from reduced manual review, up to an 80% reduction in cable review labor, less time needed for annual review, and more consistency in the review process. | The AI system's outputs are binary classification predictions on whether a document should be declassified or exempt from declassification, and multiclass predictions of the reasons for exemption. | 02/02/2023 | c) Developed with both contracting and in-house resources | Deloitte | No | The AI system's outputs are binary classification predictions on whether a document should be declassified or exempt from declassification, and multiclass predictions of the reasons for exemption. | The data used to train the model are cables from 1995-1999 that have completed manual review, with metadata on decisions from manual declassification review. Additional data includes classification/declassification guides and associated glossaries to improve model performance. Performance evaluation is measured by a human Quality Control reviewer. | No | k) None of the above | Yes | a) Yes | The model could incorrectly predict that a document should be declassified. The model could also predict that a document should be exempt when it should have been declassified, reducing public visibility. All exempted documents are reviewed by a human. | d) In-progress | a) Yes, sufficient monitoring protocols have been established | a) Yes, sufficient and periodic training has been established | a) Yes | a) Yes, an appropriate appeal process has been established | a) Direct usability testing | ||||||
| Department Of State | BP | BudgetChat AI Tool | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Generative AI | Inputting large amounts of data from paper forms into a digital system using AI. | Time and cost savings, improved ability to identify crucial supplemental information available in other budget documents, improved accuracy in identifying key budget information in individual documents. | BudgetChat responds to prompts regarding the amount of spend and positions in prior years by combing through narratives presented to the Bureau of Budget and Planning (BP) from other Department bureaus, federal agencies, and Congress. | 09/12/2024 | b) Developed in-house | Yes | BudgetChat responds to prompts regarding the amount of spend and positions in prior years by combing through narratives presented to the Bureau of Budget and Planning (BP) from other Department bureaus, federal agencies, and Congress. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of State | CA | Evaluating Customer Feedback and Sentiments with AI | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Service Delivery | Pilot | c) Not high-impact | Not high-impact | Generative AI | To leverage Natural Language Processing (NLP) and secure Large Language Models (LLM) on unstructured text data to identify actionable insights to drive customer improvement initiatives. | Greater insights about user experiences with consular services and the impact of service changes. | Multiple outputs include categorization, summarization, and analysis of customer feedback. | 07/01/2024 | b) Developed in-house | Yes | Multiple outputs include categorization, summarization, and analysis of customer feedback. | Open-source customer feedback data and other data, including data collected through customer surveys and by in-house researchers. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | CA | Consular Affairs FaceVACS | a) Pre-deployment – The use case is in a development or acquisition status. | Service Delivery | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | To automatically check passport photo quality during the Online Passport Renewal (OPR) process, providing instant feedback to ensure submitted images meet requirements. | Instant feedback to ensure submitted images meet requirements. | The output is a decision on whether to accept or reject the applicant's digitally submitted biometric face image. Applicants are prompted to retake and upload new photos if needed to meet requirements or submit a physical photo through standard processes if an acceptable digital photo cannot be obtained. | The output is a decision on whether to accept or reject the applicant's digitally submitted biometric face image. Applicants are prompted to retake and upload new photos if needed to meet requirements or submit a physical photo through standard processes if an acceptable digital photo cannot be obtained. | ||||||||||||||||||||||
| Department Of State | CA | Travel.State.Gov (TSG) Enhanced Search and Chatbot | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | Travel.state.gov (TSG) Content Refinement with AI Text Editor | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | Predictive Analytics Platform | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | Innovation and Transformation Measurement and Prediction | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CA | CodeGen - AI-assisted IT Application Development | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CGFS | Within Grade Increase Data Extraction Automation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Moving large amounts of data from paper and/or digital forms into a digital system can be challenging, time intensive, and costly. | Lower processing time and resources for cost savings. | Tabulated dataset of extracted values referred for human review. | c) Developed with both contracting and in-house resources | GCP | No | Tabulated dataset of extracted values referred for human review. | Mocked-up forms | No | i) Income, j) Employment Status | Yes | |||||||||||||||
| Department Of State | CGFS | DS-5528 Promissory Note Automation | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Moving large amounts of data from paper and/or digital forms into a digital system is challenging, time intensive, and costly. | A reduction in processing time and needed resources to lower costs to complete tasks. | Dataset of extracted information referred for human review. | 12/08/2023 | c) Developed with both contracting and in-house resources | No | Dataset of extracted information referred for human review. | Mocked-up Promissory Notes | Yes | k) None of the above | Yes | |||||||||||||||
| Department Of State | CGFS | StateInsight | a) Pre-deployment – The use case is in a development or acquisition status. | Procurement & Financial Management | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The need to summarize documents to produce a description of procurement awards and to allow authorized users to ask questions of the documents. | Time and cost savings through efficiency, improved contract management for better outcomes, and more continuity of contractor services. | Summary of documents to produce a description of procurement awards and responses to questions asked about the documents. | Summary of documents to produce a description of procurement awards and responses to questions asked about the documents. | ||||||||||||||||||||||
| Department Of State | CSO | Violence Against Civilians Model | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DS | User and Entity Behavior Analytics (UEBA) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | Data.State Analytics and AI Funhouse | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Other | Data scientists require a cybersecure platform for data science experiments, learning, and workflow deployment. Funhouse provides secure and documented access to GenAI and ML tools in an authorized and auditable platform. | Provides cybersecure analytics and AI experimentation to support mission delivery and process efficiency, increased mission efficiency, accelerated AI innovation, and cost savings. | Funhouse is a platform that enables use of general-purpose AI and ML tools, including open-source, OpenAI, and Azure AI models, allowing individual users to tackle a variety of business problems across the Department's mission. | 01/10/2025 | c) Developed with both contracting and in-house resources | Microsoft, Databricks, ZenPoint | Yes | Funhouse is a platform that enables use of general-purpose AI and ML tools, including open-source, OpenAI, and Azure AI models, allowing individual users to tackle a variety of business problems across the Department's mission. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | DT | FOIA 360 AI Matching Tool | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | Property and Procurement Analytics | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The need to expand analytics within the Integrated Logistics Management System (ILMS) through development of a machine learning model for detecting patterns of potential anomalous activity in property and procurement. | The identification and reduction of anomalous procurement activity in overseas posts. | Detection of anomalous activities within the Integrated Logistics Management System (ILMS). | Detection of anomalous activities within the Integrated Logistics Management System (ILMS). | ||||||||||||||||||||||
| Department Of State | DT | AI Accelerator | a) Pre-deployment – The use case is in a development or acquisition status. | Administrative Functions | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The need to quickly discover answers to common questions about using AI at the Department, locate tools and services, and get help with new AI needs, including guidance and requirements, available tools and solutions, and related processes. | Enhanced user experience to accelerate AI use at the Department, operational efficiencies to free up time for more complex tasks, and scalability to quickly incorporate new knowledge and related tasks. | Textual responses, links to resources, and links to forms to submit information for follow-on action. | Textual responses, links to resources, and links to forms to submit information for follow-on action. | ||||||||||||||||||||||
| Department Of State | DT | AI Research Engine (AIRE) | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The need to improve the quality of reporting from the continuously growing sources of information through the categorization, summarization, and translation of information. Includes formerly titled use case "J Reports Data Collection & Management Tool (DCT)." | AI enables the quick categorization, summarization, and translation of data to make it easily accessible for quicker drafting of higher quality reports, resulting in staff time savings, reduced redundant workload, and better information to support the mission. Significant reduction in time and costs required to create higher quality reports. | Summarized information, translated documents, and sorted data. | c) Developed with both contracting and in-house resources | Deloitte | Yes | Summarized information, translated documents, and sorted data. | No | k) None of the above | No | ||||||||||||||||
| Department Of State | DT | PFCS Proving Ground | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Other | The need for an experimentation platform to promote the development and adoption of AI solutions. | The ability for personnel and teams to experiment with developing innovative AI solutions to address mission delivery or back office processes. Efficiency and effectiveness. | Proofs of concept that can tackle a variety of business problems across the mission, and can be scaled to enterprise solutions. | 05/01/2025 | c) Developed with both contracting and in-house resources | Palantir | Yes | Proofs of concept that can tackle a variety of business problems across the mission, and can be scaled to enterprise solutions. | No | k) None of the above | No | |||||||||||||||
| Department Of State | DT | DT Data Analytics and Assessment (DAA) AI Use Case ITCP Data Harvest | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | POA&M Orchestration | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | StateChat | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The need to safely and securely access, share, summarize, and research Sensitive But Unclassified (SBU) Department information, and to reduce the time needed to translate documents, summarize reports, and draft emails. StateChat is the Department's enterprise Generative AI-powered chatbot. | Delivers greater operational efficiency and makes personnel better collaborators and competitors on the front lines of diplomacy, delivering decision advantages, negotiation preparation, and preparedness through simulation. Reduced costs and more consistent documents referencing SBU information. | StateChat is a chatbot interface enriched with tools that generate formatted paper products, offer rapid and transparent searches of internal documents, and allow for research, synthesis, and drafting of documents with reference to cables and other internal information, policies, and processes. | 03/04/2024 | a) Purchased from a vendor | Palantir, OpenAI | Yes | StateChat is a chatbot interface enriched with tools that generate formatted paper products, offer rapid and transparent searches of internal documents, and allow for research, synthesis, and drafting of documents with reference to cables and other internal information, policies, and processes. | N/A - There is no training or fine-tuning of the foundational model, but there is ongoing evaluation of the performance of the foundational model. | Yes | k) None of the above | Yes | ||||||||||||||
| Department Of State | DT | J Reports Data Collection & Management Tool (DCT) | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | F/FAO | Natural Language Processing (NLP) for Foreign Assistance Appropriations Analysis | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | Service Delivery | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | Summarizing the key points of a lengthy report using AI. | The NLP application reduces the time needed to extract congressional directives from the annual appropriations bill, which ultimately shortens the cycle time for generating the report detailing the annual allocation of U.S. foreign assistance funds to foreign countries and international organizations (i.e., the 653(a) report). | Consolidated congressional directives from the annual appropriations bill to be included in the report detailing the allocation of U.S. foreign assistance funds to foreign countries and international organizations (i.e., the 653(a) report). | 08/05/2021 | c) Developed with both contracting and in-house resources | Guidehouse | Yes | Consolidated congressional directives from the annual appropriations bill to be included in the report detailing the allocation of U.S. foreign assistance funds to foreign countries and international organizations (i.e., the 653(a) report). | Annual appropriations bills | No | k) None of the above | Yes | ||||||||||||||
| Department Of State | F/FAO | ForeignAssistance.gov Processing for Mismatched Data | d) Retired – The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | F/FAO | FA.gov PII Picker | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The need to streamline a process, improve accuracy, and provide validation checks. The previous process to identify Personally Identifiable Information (PII) required additional layers of manual verification. | Additional support to ensure data and privacy protections in public-facing databases with less manual time and effort. | The identification of potential PII contained in data submissions. | c) Developed with both contracting and in-house resources | Guidehouse | No | The identification of potential PII contained in data submissions. | The PII Picker fine-tuned the spaCy NER model using a custom dataset of PII. This custom dataset was created using the PII that DOS identified while reviewing financial data prior to publication on ForeignAssistance.gov. | Yes | l) Other | Yes | |||||||||||||||
| Department Of State | F/FAO | Integrated Country Strategy (ICS) Turbo | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The need to conduct thematic analysis on large volumes of Integrated Country Strategy (ICS) data in a comprehensive and time-efficient manner. | Increased capacity of personnel to identify lessons learned and best practices by leveraging historical documentation. This is intended to be accomplished by decreasing the time and effort spent reading, synthesizing, and categorizing the contents of historical Integrated Country Strategies (ICS). | Thematic categories that group ICS sub-objectives written over periods of time across all countries. | 06/11/2025 | c) Developed with both contracting and in-house resources | Guidehouse | No | Thematic categories that group ICS sub-objectives written over periods of time across all countries. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | F/FAO | FA.gov RedactAid | c) Deployed – The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The need to improve data quality of information and prevent sensitive data from being published to ForeignAssistance.gov. | Reduced effort and improved consistency and accuracy in protecting sensitive information from getting published in public-facing databases. | Sensitive information is identified and flagged prior to ForeignAssistance.gov publication. | c) Developed with both contracting and in-house resources | Guidehouse | No | Sensitive information is identified and flagged prior to ForeignAssistance.gov publication. | The RedactAid model was trained using historical unredacted ForeignAssistance.gov datasets. | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | NFATC | Automatic Detection of Authentic Material | b) Pilot – The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The need to detect/identify authentic materials in target languages, reducing the time to develop language curricula and tests. | Reduced staff hours and improved variety of materials in foreign languages. | Authentic text, audio, and video in 8 foreign languages. | 11/10/2023 | b) Developed in-house | No | Authentic text, audio, and video in 8 foreign languages. | No | k) None of the above | Yes | ||||||||||||||||
| Department Of State | NFATC | Office of the Historian (OH) Historical Analysis for Negotiations | b) Pilot The use case has been deployed in a limited test or pilot capacity. | International Affairs | Pilot | c) Not high-impact | Not high-impact | Generative AI | The need for historical information in real time to assist with analysis by the Office of the Historian (OH) and decision making. | Negotiators save time and achieve information advantage related to historical country relationships. | Succinct 1-page overviews of research for US negotiators and their assistants. | 07/01/2025 | b) Developed in-house | No | Succinct 1-page overviews of research for US negotiators and their assistants. | References the Foreign Relations of the United States series | No | k) None of the above | No | |||||||||||||||
| Department Of State | NFATC | FSI Enterprise Operations - Gaming and Simulations | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | NFATC | Enhancing Training Effectiveness in FSILearn Using AI | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | NFATC | FSI Continuous Learning Solutions | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | GPA | Digital Media Analytics Platform | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | International Affairs | Deployed | c) Not high-impact | Not high-impact | Generative AI | Using open-source neural machine translation models to translate global media articles and Department and public foreign social media posts into English. | The use case reduces labor in producing media summary reports. | Summaries of large volumes of foreign language news and online posts to help teams identify and understand trends in a more efficient manner. | 03/01/2024 | b) Developed in-house | Yes | Summaries of large volumes of foreign language news and online posts to help teams identify and understand trends in a more efficient manner. | FLORES 200+ is used for evaluating translation models | No | k) None of the above | Yes | |||||||||||||||
| Department Of State | DT | Electronic Health Record AI Enhancements | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Health & Medical | Deployed | a) High-impact | High-impact | Generative AI | Provide efficient, accurate, and comprehensive medical electronic health record (EHR) management, patient care, and administrative workflows by leveraging AI-powered tools, including large language models (LLMs) and natural language processing (NLP). | Improved operational efficiency, reduction of errors, and greater focus on delivering quality patient care. Improved patient data and safety with cost savings and operational efficiencies, transparency, and security. Enhanced clinical decision-making by summarizing patient information, identifying discrepancies, and generating referrals based on historical data and current inputs. | Automated data extraction, validation, sentiment analysis, categorization, identification of document types, data discrepancies, and text summarization into structured medical chart components. | 03/03/2025 | c) Developed with both contracting and in-house resources | Palantir | Yes | Automated data extraction, validation, sentiment analysis, categorization, identification of document types, data discrepancies, and text summarization into structured medical chart components. | MED-defined policy and definitions | Yes | https://www.state.gov/wp-content/uploads/2024/08/MED-PLTR-MED-PIA-for-Public-Facing-Site.pdf | b) Sex c) Age | Yes | a) Yes | https://www.state.gov/wp-content/uploads/2024/08/MED-PLTR-MED-PIA-for-Public-Facing-Site.pdf | Pending AI Impact Assessment | d) In-progress | a) Yes, sufficient monitoring protocols have been established | a) Yes, sufficient and periodic training has been established | a) Yes | b) Not applicable | a) Direct usability testing | ||||
| Department Of State | MGT | Rosie Chat Bot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Human Resources | Pilot | c) Not high-impact | Not high-impact | Generative AI | The need for employees to locate consistent, accurate answers to HR-related questions quickly based on internal information. | Cost savings, reduced wait times, improved access to information, enhanced efficiency, scalability, and consistency in communication. | Contextual answers and emails. | 09/02/2025 | b) Developed in-house | Yes | Contextual answers and emails. | No | k) None of the above | No | ||||||||||||||||
| Department Of State | MGT | Utility Invoices Data Extraction | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Administrative Functions | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The need to reconcile and process payments faster. | Reduced time and labor costs through a more efficient and accurate process that automatically identifies key information to issue payments. | Key details from utility invoices, such as the account name, amount due, and consumption for processing payments and for preparing the monthly consumption reports. | 10/04/2023 | b) Developed in-house | No | Key details from utility invoices, such as the account name, amount due, and consumption for processing payments and for preparing the monthly consumption reports. | Historical bills. | No | k) None of the above | No | |||||||||||||||
| Department Of State | MGT | Database for Arrivals | a) Pre-deployment The use case is in a development or acquisition status. | Human Resources | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Automating the extraction of personnel information to create a centralized database for arrivals and departures. | Enhances collaboration and coordination among management sections, streamlining operations. | Centralized list or Excel sheet database of personnel information. | Centralized list or Excel sheet database of personnel information. | ||||||||||||||||||||||
| Department Of State | MGT | AI SharePoint Chatbot | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | The need for efficient access to post-specific information. | Improves productivity, engagement, and access to post-specific information. | Answers to post-specific questions via a chatbot integrated into the Embassy SharePoint site. | Answers to post-specific questions via a chatbot integrated into the Embassy SharePoint site. | ||||||||||||||||||||||
| Department Of State | MGT | Databricks Code Assistant | a) Pre-deployment The use case is in a development or acquisition status. | Information Technology | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | Determining potential billing problems for Department mobile plans can be time consuming and prone to errors. | Improve efficiency, save costs, and save time conducting analyses of mobile plans, bills, and usage. | A data usage report. | A data usage report. | ||||||||||||||||||||||
| Department Of State | PM | Natural Language Processing (NLP) to pull key information from unstructured texts | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | WHA | Walter: Generative AI Support Bot | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | The need to more effectively focus on critical customer requests with existing resources. | Cost savings and reduced customer wait times by addressing Tier 0 and Tier 1 requests without human intervention. | Responses to Tier 0 and Tier 1 requests via a virtual customer service agent based on current management, policies, directives, and other internal resources. | 03/06/2025 | a) Purchased from a vendor | Microsoft | Yes | Responses to Tier 0 and Tier 1 requests via a virtual customer service agent based on current management, policies, directives, and other internal resources. | Yes | k) None of the above | No | |||||||||||||||
| Department Of State | ECA | ECA Program Management and Outreach - Summarization | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | R/GEC | Storyzy | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | GPA | AI Tools to Enhance PD Workflows | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CSO | Mass Mobilization Model | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | CSO | Senturion Alpha | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | WHA | WHA/EX Information Management | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Information Technology | Deployed | c) Not high-impact | Not high-impact | Generative AI | Knowledge Management for administrative and IT related processes. Formerly known as "Low Earth Orbit (LEO) Budget Office Inquiries" to answer questions frequently asked about budget. | Cost savings and reduced customer wait times. | Generative AI or logic-based responses with answers to administrative/IT related FAQs. | 03/03/2025 | c) Developed with both contracting and in-house resources | Microsoft | Yes | Generative AI or logic-based responses with answers to administrative/IT related FAQs. | WHA/EX SharePoint Data | Yes | k) None of the above | No | ||||||||||||||
| Department Of State | NFATC | Creating Persistent Virtual Reality Personas for Dynamic Training | a) Pre-deployment The use case is in a development or acquisition status. | Other | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | There is a need to improve effectiveness of immersive training in the future. The team develops a dynamic integration between multiple AI tools to create stored/persistent "personas" that can be interacted with during training-related virtual reality (VR) exercises. | Natural conversation-based simulations and more effective training scenarios. | Persistent Personas; Speech-to-text API; Language Translation | Persistent Personas; Speech-to-text API; Language Translation | ||||||||||||||||||||||
| Department Of State | CA | Translation of Consular Content using AI | d) Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | ||||||||||||||||||||||||||||
| Department Of State | DT | TIP Report Research Translation | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | The TIP Report Translation AI system provides informal, unofficial translations for materials related to the annual Trafficking in Persons report. This capability saves staff time and allows researchers and drafters to focus on critical report needs, such as identifying and resolving data gaps. | This capability saves staff time and allows researchers and drafters to focus on critical report needs, such as identifying and resolving data gaps. | Informal, unofficial translations of PDFs and Microsoft Word documents. Each document includes a watermark denoting that the AI translation is unofficial and must be reviewed by a human. | 11/01/2023 | c) Developed with both contracting and in-house resources | Deloitte, AzureAI | Yes | Informal, unofficial translations of PDFs and Microsoft Word documents. Each document includes a watermark denoting that the AI translation is unofficial and must be reviewed by a human. | We do not train or fine-tune the foundational model, but there is ongoing evaluation of the performance of the foundational model. | No | k) None of the above | No | ||||||||||||||
| Department Of State | DS | Diplomatic Security - Legal Instruction Unit | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Law Enforcement | Pilot | c) Not high-impact | Not high-impact | Generative AI | Insufficient personnel and resources available to create multimedia, conduct necessary research, and develop required curriculum. | Cost savings due to increased efficiency and decreased contracts/personnel to meet the needs of the Department. | The various AI systems will be used to synthesize information and provide recommendations on curriculum content, such as scenario development, multimedia content, and character script recommendations. | 08/01/2025 | c) Developed with both contracting and in-house resources | LexisNexis, ChatGPT | Yes | The various AI systems will be used to synthesize information and provide recommendations on curriculum content, such as scenario development, multimedia content, and character script recommendations. | Publicly available (legal research and related materials). | No | k) None of the above | No | ||||||||||||||
| Department Of State | CA | NIV Adjudication Review Recommendation Engine (ARRE) | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Administrative Functions | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | ARRE is used to make post-adjudication managerial quality assurance review time more efficient. The current required managerial review of cases includes a randomly selected portion, which is not optimized to help managers allocate time toward cases where secondary review is most likely to surface documentation gaps, policy-complex adjudications, or process compliance issues. As a result, managers may spend review capacity on routine cases while missing opportunities to identify coaching needs or quality issues within the fixed review window. | The primary purpose of ARRE is to support the post-adjudication, managerial quality assurance review workflow by helping managers efficiently meet existing review requirements through queue ordering and prioritization. This provides more efficient use of manager review time and more consistent post-adjudication oversight by better targeting the fixed, required managerial review effort toward cases with atypical attributes that warrant a second look for quality assurance purposes. The expected benefits are improved review consistency and improved documentation of decision-making. | ARRE produces a unitless anomaly score (0–1) for each already-adjudicated case and uses the score to generate a post-level ranked list that bins cases into priority tiers (e.g., high/medium/low) for managerial review. Outputs are advisory: managers may override or disregard the prioritization and may review any case consistent with existing authorities and review requirements. The output is used to help order the managerial quality assurance workload; it is not a decision output and is not used as an applicant risk determination. ARRE is used only after an adjudication is complete to support internal managerial oversight and does not serve as a basis for any visa eligibility determination or other binding action affecting the applicant. ARRE does not make, recommend, or change visa issuance/refusal decisions; all adjudicative determinations remain the responsibility of U.S. government officials. | 01/01/2025 | c) Developed with both contracting and in-house resources | Guidehouse | No | ARRE produces a unitless anomaly score (0–1) for each already-adjudicated case and uses the score to generate a post-level ranked list that bins cases into priority tiers (e.g., high/medium/low) for managerial review. Outputs are advisory: managers may override or disregard the prioritization and may review any case consistent with existing authorities and review requirements. The output is used to help order the managerial quality assurance workload; it is not a decision output and is not used as an applicant risk determination. ARRE is used only after an adjudication is complete to support internal managerial oversight and does not serve as a basis for any visa eligibility determination or other binding action affecting the applicant. ARRE does not make, recommend, or change visa issuance/refusal decisions; all adjudicative determinations remain the responsibility of U.S. government officials. | ARRE was trained on a random sample of NIV application data from 2021 onward drawn from the Consular Consolidated Database. Performance was evaluated through controlled tests and pilot reviews at multiple posts, where managers compared ARRE-selected cases against randomly selected cases and consistently found that ARRE's recommendations yielded a higher proportion of cases warranting managerial review. | Yes | b) Sex c) Age | Yes | ||||||||||||||
| Department Of State | CA | Live Consular AI Language Augmentation (LCALA) - Visa Interview Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Other | Pilot | a) High-impact | High-impact | Natural Language Processing (NLP) | Language gaps between applicants and non-native-speaker adjudicators create inconsistent, ad-hoc translation during brief (~3-minute) interviews, or limit questioning to what the adjudicator is familiar with asking, increasing the risk of misunderstanding (dialect/register, pronouns, named entities) and forcing repeat questions, delays, or uneven outcomes. Interpreter availability is limited, and reliance on bilingual staff is not scalable to demand and will impact other consular functions. These constraints reduce interview efficiency, strain officer workload, and can erode customer experience and perceived equity. | More consistent comprehension in ~3-minute interviews; Improved efficiency and throughput; Equity and customer experience; Reduced interpreter burden; Operational resilience | LCALA provides real-time transcription and neural machine translation of spoken exchanges at the interview window, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | a) Purchased from a vendor | Microsoft | No | LCALA provides real-time transcription and neural machine translation of spoken exchanges at the interview window, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | LCALA uses Microsoft's vendor-managed Speech-to-Text and Neural Machine Translation models, which are trained and fine-tuned on large, proprietary multilingual speech/text corpora and evaluated with standard metrics (e.g., WER for speech; BLEU/ChrF/COMET with human review for translation). No Department of State audio or transcripts are used to train or fine-tune these models (no-trace processing). For this pilot, the team performs limited operational Quality Assurance (e.g., sampled named-entity accuracy, latency, officer re-ask rates) to evaluate performance in the visa-interview context. | No | k) None of the above | No | |||||||||||||||
| Department Of State | CA | AI Programmatic Insights | c) Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Other | Deployed | c) Not high-impact | Not high-impact | Generative AI | Regional Directors (RD) have to analyze current passport trends and data to determine overall performance and resource requirements to enable successful alignment to overall performance targets; however, the raw data requires analysis and doesn't currently allow for easy input to evaluate different scenarios, which results in RDs having to respond to passport performance with limited information. | An AI-driven solution that empowers Regional Directors to forecast programmatic trends, anticipate demand cycles, and respond effectively to external factors. This solution empowers Regional Directors with data-driven insights to anticipate demand, optimize resources, and proactively address challenges impacting passport agencies. | Monthly snapshots, overview of historical trends, agency-level overview analysis, and agency specific highlights. | b) Developed in-house | No | Monthly snapshots, overview of historical trends, agency-level overview analysis, and agency specific highlights. | Staffing data, Overtime data, and Age Tracker Report data | No | k) None of the above | Yes | ||||||||||||||||
| Department Of State | CA | RegScale | a) Pre-deployment The use case is in a development or acquisition status. | Cybersecurity | Pre-deployment | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The AI in RegScale is intended to reduce the manual workload, complexity, and latency in compliance management. It automates the extraction and initial drafting of regulatory documentation, identifies control gaps, and provides plain-language explanations of requirements. The goal is to improve efficiency, accuracy, and continuous audit readiness while enabling small teams to scale compliance across multiple regulatory frameworks. | The expected benefits of RegScale's AI include reduced compliance costs, faster audit readiness, improved accuracy of regulatory reporting, and more efficient use of agency staff resources. For the general public, this translates into stronger protection of sensitive data, improved transparency, and quicker delivery of secure government services. | The AI system in RegScale retrieves compliance documentation, control gap analyses, plain-language explanations of regulations, policy-to-control mappings, and continuous compliance reports. These outputs are designed to reduce manual workload, accelerate audit readiness, and provide real-time visibility into an agency's compliance posture. | The AI system in RegScale retrieves compliance documentation, control gap analyses, plain-language explanations of regulations, policy-to-control mappings, and continuous compliance reports. These outputs are designed to reduce manual workload, accelerate audit readiness, and provide real-time visibility into an agency's compliance posture. | ||||||||||||||||||||||
| Department Of State | CA | Live Consular AI Language Augmentation (LCALA) - OCS/ACS Pilot | b) Pilot The use case has been deployed in a limited test or pilot capacity. | Emergency Management | Pilot | c) Not high-impact | Not high-impact | Natural Language Processing (NLP) | The LCALA OCS/ACS Pilot aims to close language gaps in routine, time-sensitive citizen services by providing on-demand interpretation for calls and in-person interactions. Today, limited interpreter availability and uneven reliance on bilingual staff can cause delays, confusion, and repeat contacts when conveying guidance or coordinating with local authorities. LCALA seeks to improve clarity and timeliness of these communications, reducing callbacks and handoffs, while remaining assistive only and not replacing certified interpreters where required. | Faster, clearer OCS/ACS communications; Fewer repeat contacts and escalations; Service equity and accessibility; Staff efficiency; Interagency coordination. | LCALA provides real-time transcription and neural machine translation of spoken exchanges on demand, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | 01/01/2025 | a) Purchased from a vendor | Microsoft | No | LCALA provides real-time transcription and neural machine translation of spoken exchanges on demand, delivering translated audio and on-screen text on the device; when enabled, it can generate a brief time-stamped transcript of the conversation. | LCALA uses Microsoft's vendor-managed Speech-to-Text and Neural Machine Translation models, which are trained and fine-tuned on large, proprietary multilingual speech/text corpora and evaluated with standard metrics (e.g., WER for speech; BLEU/ChrF/COMET with human review for translation). No Department of State audio or transcripts are used to train or fine-tune these models (no-trace processing). For this pilot, the team performs limited operational Quality Assurance (e.g., sampled named-entity accuracy, latency, officer re-ask rates) to evaluate performance in the OCS/ACS context. | No | k) None of the above | No | ||||||||||||||
| Department Of The Interior | ONRR | DOI-0270 | Data Extraction Using MS Power Automate AI Functionality [2024 INV#WO0000000110500] | Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0269 | Liable Party Research [2024 INV#WO0000000110496] | Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | OS | DOI-0268 | I-NEPA System: Leveraging Artificial Intelligence (AI) for Enhanced Efficiency [2024 INV# WO0000000111250] | Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | OS | DOI-0267 | Public Comment Analysis Tool (PCAT): Leveraging Artificial Intelligence (AI) for Enhanced Efficiency [2024 INV#WO0000000106351] | Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | NPS | DOI-0266 | Use of AI to Enhance Flash Flood Forecast Tool [2024 Inv#WO0000000110323] | Pre-deployment The use case is in a development or acquisition status. | Emergency Management | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There is an opportunity to better predict rainfall on a watershed scale in Great Smoky Mountains National Park and provide forecasts of flooding events with a goal of a 24+ hour lead time. | Once we can implement and use the flood forecasting app, we anticipate being able to use it proactively to close at-risk sections of the park during forecast flooding events, saving lives and reducing risks to first responders. | Improved flood forecasts. | FALSE | Improved flood forecasts. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | NPS | DOI-0265 | Bird Nest Detection [2024 Inv#WO0000000110506] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Monitoring of colonial nesting birds with manual photo processing takes substantial time and effort. The goal is to identify bird nests with an object detection model. | Researchers will be able to monitor bird colonies and their populations more efficiently and consistently. | Assessments of active nests for bird species. | FALSE | Assessments of active nests for bird species. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0264 | Remote Sensing Coastal Change - Shoreline Change | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Rapid classification of satellite imagery to determine shoreline change | Quicker, more accurate identification of risks to public safety and infrastructure due to erosion or other shoreline changes. | Predictions of shoreline change | FALSE | Predictions of shoreline change | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0260 | ROV Smart Touch Subsea Pipeline Inspections | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Other | The lack of efficiency and/or capability for BSEE and the Oil and Gas industry to inspect bolt failures for underwater pipelines. | Potentially enhance subsea pipeline inspections by integrating advanced robotics and machine learning technologies. | Bolt and flange tightness level prediction | FALSE | Bolt and flange tightness level prediction | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0259 | Well Risk Assessment [2024 Inv# WO0000000108776] | Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0258 | Sustained Casing Pressure Identification [2024 Inv# WO0000000108777] | Retired The use case was reported in the agency's prior years' inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | BSEE | DOI-0257 | Level 1 Survey Report Corrosion Level Classification | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Computer Vision | Offshore operators conduct Level 1 surveys annually to report on platform structural integrity, as mandated by 30 CFR 250.901(a)(7), and submit these surveys to BSEE. Each survey includes a corrosion assessment of the platform with accompanying photos. Each area is assigned a coating grade; these grades are key indicators of a platform's overall structural health. Currently, BSEE manually reviews each report to determine if a platform requires further audits, a process that is both time and labor intensive. | Support a more efficient and accurate review of Level 1 Survey photo corrosion levels | The outputs will include a comparison between the original corrosion level assigned to each image in the Level 1 Survey and the corresponding level determined by the machine learning algorithm. | Developed with both contracting and in-house resources | NASA | FALSE | The outputs will include a comparison between the original corrosion level assigned to each image in the Level 1 Survey and the corresponding level determined by the machine learning algorithm. | FALSE | FALSE | ||||||||||||||||
| Department Of The Interior | BSEE | DOI-0256 | Well Activity Report Classification | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Researching the use of Masked Language Models and convolutional deep neural networks to identify classification systems for significant well events using data from Well Activity Reports. | Enable quicker detection of significant well events to help BSEE personnel mitigate risks and address issues more efficiently. | Decision on what type of significant event a well activity report should report. | Developed with both contracting and in-house resources | FALSE | Decision on what type of significant event a well activity report should report. | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | BSEE | DOI-0255 | Autonomous Drone Inspections | Pre-deployment The use case is in a development or acquisition status. | Energy & the Environment | Pre-deployment | c) Not high-impact | Not high-impact | Other | The Bureau of Safety and Environmental Enforcement, within the Department of the Interior, is requesting a trade study of the feasibility of autonomously inspecting offshore facilities that are non-boardable due to various hazards. These inspections are currently performed by inspectors at a standoff distance from the non-boardable facility on board boats, helicopters, or from land. The distance between the personnel and the facility reduces the quality of inspections that are possible for non-boardable facilities. | Increase the inspection capabilities and efficiency of inspections through the use of small autonomous uncrewed aerial systems (sUAS). | Multiple outputs for inspections, including decisions on corrosion levels, whether a platform is boardable, and whether it has methane leaks. | Developed with both contracting and in-house resources | FALSE | Multiple outputs for inspections, including decisions on corrosion levels, whether a platform is boardable, and whether it has methane leaks. | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | FWS | DOI-0254 | Development of a computer vision model to monitor for early detection of habitat loss across the landscape. [2024 INV#DOI-65] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | The U.S. Fish and Wildlife Service is planning to partner with the Chesapeake Conservancy, an NGO, to develop a computer vision model to monitor for early detection of habitat loss across the landscape, a significant threat to biodiversity, including threatened and endangered species. By using a computer vision model one can rapidly identify and flag areas where habitat loss may be occurring due to natural or human-caused disturbances. Early detection can facilitate rapid responses, when appropriate, or allow practitioners to accurately calculate habitat loss over time. More accurate estimates of habitat loss allow for better management decisions and potentially shorter recovery times for threatened and endangered species. | AI-powered computer vision enables early detection of habitat loss, allowing faster, more accurate responses to threats. This supports better conservation decisions and helps protect and recover threatened and endangered species. | AI-powered computer vision enables early detection of habitat loss, allowing faster, more accurate responses to threats. This supports better conservation decisions and helps protect and recover threatened and endangered species. | FALSE | AI-powered computer vision enables early detection of habitat loss, allowing faster, more accurate responses to threats. This supports better conservation decisions and helps protect and recover threatened and endangered species. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | IBC | DOI-0253 | Intelligent Optical Character Recognition [2024 INV#WO0000000110733] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0252 | Image Generation and Audio Video Editing [2024 Inv#WO0000000110551] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0251 | Machine Learning Model Optimization [2024 INV#WO0000000110563] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0250 | Video Creation and Editing [2024 Inv#WO0000000110494] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | ONRR | DOI-0249 | ONRR Video Hosting Platform [OVHP] [2024 Inv#WO0000000110488] | Retired The use case was reported in the agency's prior year's inventory, but its development and/or use has since been discontinued. | Retired | c) Not high-impact | Not high-impact | FALSE | FALSE | FALSE | ||||||||||||||||||||||||
| Department Of The Interior | USGS | DOI-0248 | VoiceAtlas no-code chatbot framework | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Agentic AI | Many business areas in USGS could benefit from AI tools but do not have technical expertise on staff. | VoiceAtlas is an out-of-the-box, no-code solution for non-technical employees. | Chatbots with guardrails and knowledge bases that can be created and maintained by non-technical employees. | Purchased from a vendor | Navteca | FALSE | Chatbots with guardrails and knowledge bases that can be created and maintained by non-technical employees. | The product is being tested with public knowledge bases, such as Library Guide documents | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0247 | Automated Walrus Haulout Monitoring [2024 INV#WO0000000110052] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | provide a framework for using pre-trained image classification convolutional neural network (CNN) models to make predictions on unlabeled image datasets to provide data for further analysis of walrus (Odobenus rosmarus) coastal haulout occupation | time savings, reduce manual image review | determining the presence and absence of walruses at haulout locations via remote camera traps | Developed in-house | FALSE | determining the presence and absence of walruses at haulout locations via remote camera traps | camera trap imagery | https://www.sciencebase.gov/catalog/ | FALSE | None of the Above | FALSE | https://code.usgs.gov/ | |||||||||||||
| Department Of The Interior | USGS | DOI-0246 | ChatGPT to write Python scripts for ArcGIS Pro Maps to be CVD-Friendly [2024 INV#WO0000000107402] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Generative AI | automate the process of changing colors of planetary geologic map units within ArcGIS pro so that they are all color-vision deficiency friendly | time savings | colors of planetary geologic map units within ArcGIS pro so that they are all color-vision deficiency friendly | Developed in-house | FALSE | colors of planetary geologic map units within ArcGIS pro so that they are all color-vision deficiency friendly | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0245 | Machine Learning approach to predict the composition of seafloor massive sulfide deposits [2024 INV#WO0000000108420] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict the composition of seafloor massive sulfide deposits | time and cost savings | publications | Developed in-house | FALSE | publications | geochemical data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0244 | Harmful Algal Bloom prediction and detection system for Williams Fork Reservoir [2024 INV#WO0000000184339] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We have used remote sensing products to detect harmful algal blooms throughout the Upper Colorado Basin. However, there have been multiple algal blooms in the Williams Fork Reservoir that have remained undetected. | leverage field data collected with satellite overpasses to tease out what may be causing this discrepancy. We want to leverage AI/ML to see if we can build new models or improve existing ones to boost the signal in this high-altitude reservoir | potential drivers (wind, nutrients, cloud cover) to potentially predict (based on antecedent conditions) when and where new blooms will occur | Developed in-house | FALSE | potential drivers (wind, nutrients, cloud cover) to potentially predict (based on antecedent conditions) when and where new blooms will occur | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0243 | Google Cloud Vision [2024 INV#WO0000000154445] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | USGS Water Mission Area needs text extraction from publicly available topographic maps as a callable function of an application. | time savings, cost savings | text data extracted from topographic maps | 02/03/2025 | Purchased from a vendor | FALSE | text data extracted from topographic maps | publicly available topographic maps | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0242 | Google Vertex AI Document workbench [2024 INV#WO0000000154393] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | The USGS Energy Resource Program needs to use Google Vertex AI document workbench to perform data rescue on some old paper USGS publication tables with oil and gas production data from the 1940s-1980s. This data is not available in digital form. | data rescue | digital data extracted from paper publication - oil and gas production data from 1940s - 1980s | 02/03/2025 | Purchased from a vendor | FALSE | digital data extracted from paper publication - oil and gas production data from 1940s - 1980s | oil and gas production data from 1940s - 1980s | https://pubs.usgs.gov/publication/25181 | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0241 | USGS Azure OpenAI ChatGPT [2024 INV#WO0000000154392] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Various groups within USGS need a generic ChatGPT API to call from IDEs and various applications. | Cost savings, USGS sought to implement its own pay-as-you-go model in the DOI Azure tenant. | TBD | 02/03/2025 | Purchased from a vendor | OpenAI, Microsoft | FALSE | TBD | none, we are using the generic GPT 4.0 model from OpenAI | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0240 | Use modern AI/ML approaches to gain insight into USGS unstructured data such as text [2024 INV#WO0000000131706] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | There are a lot of unstructured data within USGS. Current efforts are to manually extract and analyze that data. We want to apply modern AI/ML approaches such as RAG (Retrieval-Augmented Generation) to gain insights from that unstructured data. | gain insights from that unstructured data | Develop AI/ML approaches such as RAG (Retrieval-Augmented Generation) to gain insights from that unstructured data | Purchased from a vendor | FALSE | Develop AI/ML approaches such as RAG (Retrieval-Augmented Generation) to gain insights from that unstructured data | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0239 | Parsing large quantities of text data [2024 INV#WO0000000113692] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The goal is to assess how news sources discuss water events, how this relates to water availability and vulnerability, and how these patterns vary over time/space. | time savings, without AI this would require reading through thousands of news articles to extract relevant information including mention of a specific hazard (e.g., drought, flood, HABs), geographic region, organization, and other topical keywords. | identify noteworthy water events in the Upper Colorado River Basin from news articles | Purchased from a vendor | FALSE | identify noteworthy water events in the Upper Colorado River Basin from news articles | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0238 | PAWSC Ecotoxicology PFAS Machine Learning [2024 INV#WO0000000112908] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | assess the ecological health risk of PFAS in Pennsylvania stream surface water | predict potential PFAS exposure effects in unmonitored stream reaches | Leveraging a tailored convolutional neural network (CNN), a validation accuracy of 78% was achieved, directly outperforming traditional methods that were also used, such as logistic regression and gradient boosting (accuracies of 65%) | 12/09/2024 | Developed in-house | FALSE | Leveraging a tailored convolutional neural network (CNN), a validation accuracy of 78% was achieved, directly outperforming traditional methods that were also used, such as logistic regression and gradient boosting (accuracies of 65%) | PFAS concentrations in environmental waters, specifically streams for this model. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0237 | Prioritized Constituents: Sediment [2024 INV#WO0000000109726] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Regional prediction of suspended sediment concentration in unmonitored rivers to characterize sediment transport in the Delaware, Illinois, and Colorado River Basins. | ability to characterize sediment transport in the Delaware, Illinois, and Colorado River Basins | prediction of suspended sediment concentration in unmonitored rivers | 10/02/2023 | Developed in-house | FALSE | prediction of suspended sediment concentration in unmonitored rivers | Climate, Hydrologic Data, Land Use, Terrain Elevation | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0236 | Avian population estimates from passive acoustic monitoring [2024 INV#WO0000000109725] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Reliable estimates of avian abundance from acoustic recordings | improved estimates of avian abundance | estimates of avian abundance | FALSE | estimates of avian abundance | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0235 | FEMA mixed population flood-frequency analysis [2024 INV#WO0000000109723] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Classification of historical floods based on causal mechanisms to support improved estimation of flood recurrence intervals | improved estimation of flood recurrence intervals | Classification of historical floods based on causal mechanisms | FALSE | Classification of historical floods based on causal mechanisms | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0234 | Data-Driven Streamflow Drought [2024 INV#WO0000000109714] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | streamflow drought forecasts | ability to provide drought forecasts using data-driven, machine learning approaches for USGS gage locations across the continental U.S. | drought forecasts | 10/03/2022 | Developed in-house | FALSE | drought forecasts | Climate, earth science, land use | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0233 | National-Extent Groundwater Quality Prediction for the National Water Census and Regional Integrated Water Availability Assessments [2024 INV#WO0000000109709] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | provide Nationally consistent predictions of groundwater quality (salinity and nutrients) relevant for human and ecological uses and its influence on surface-water | Nationally consistent predictions of groundwater quality can be integrated into comprehensive water-availability assessments including the National Water Census and regional Integrated Water Availability Assessments | predictions of groundwater quality and integration into comprehensive water-availability assessments including the National Water Census and regional Integrated Water Availability Assessments | 10/01/2021 | Developed in-house | FALSE | predictions of groundwater quality and integration into comprehensive water-availability assessments including the National Water Census and regional Integrated Water Availability Assessments | Earth Science, Land Use, Climate, Water Quality, Population Density | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0232 | Use of artificial intelligence tools for optimization and documentation for computer codes [2024 INV#WO0000000109681] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | computer codes are needed that implement earthquake rupture forecasts and ground-motion models. This project uses ChatGPT to suggest optimizations and documentation for computer codes. | time savings | documentation, optimized code | 10/02/2023 | Developed with both contracting and in-house resources | OpenAI | FALSE | documentation, optimized code | computer codes | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0231 | Mapping sagebrush from drones to satellites [2024 INV#WO0000000109568] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | accurate maps of sagebrush are needed to identify seasonal habitats of sage-grouse for the Bureau of Land Management | extend presence modeling to map fractional cover of sagebrush in the Dakotas | accurate maps of sagebrush to identify seasonal habitats of sage-grouse | FALSE | accurate maps of sagebrush to identify seasonal habitats of sage-grouse | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0230 | Delineating sub-surface drainage using satellite imagery [2024 INV#WO0000000109525] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Knowing subsurface drainage (tile-drain) extent is integral to understanding how landscapes respond to precipitation events and subsequent days of drying, as well as how soil characteristics and land management influence stream response. | a time series of tile-drain extent would inform one aspect of land management that complicates our ability to explain streamflow and water-quality as a function of climate variability or conservation management | time series of tile-drain extent | Developed in-house | FALSE | time series of tile-drain extent | Satellite imagery, soils data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0229 | Vegetation mapping on the Hawaiian island of Lanai [2024 INV#WO0000000109501] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | accurately classify plant species across the Hawaiian island of Lanai, producing detailed maps that can support conservation planning and monitoring of both native and invasive species | accurately classify plant species | detailed maps that support conservation planning and monitoring of both native and invasive species | 03/07/2024 | Developed in-house | FALSE | detailed maps that support conservation planning and monitoring of both native and invasive species | Digital Globe WorldView-2 satellite imagery; airborne imagery collected by EagleView | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0228 | Reinforcement Learning for Helmholtz Coil Operation and Simulation [2024 INV#WO0000000109497] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | optimize performance of its magnetic observatories | reinforcement learning (RL) can significantly aid in the operation of a Helmholtz coil by optimizing its performance in generating uniform magnetic fields | optimized performance in generating uniform magnetic fields | FALSE | optimized performance in generating uniform magnetic fields | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0227 | Population and critical habitat modeling of overwintering monarch butterflies [2024 INV#WO0000000109410] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Monarch butterflies in the western United States overwinter at very specific locations across coastal California. As monarch populations decline, it becomes important to identify the characteristics of what makes an overwintering grove a suitable habitat. | Understanding the land cover and climatic factors that influence site selection by monarchs can aid land managers in both making decisions to support existing critical habitat and identifying previously unknown locations where monarchs overwinter | Characteristics of what makes an overwintering grove a suitable habitat. | 10/01/2023 | Developed in-house | FALSE | Characteristics of what makes an overwintering grove a suitable habitat. | High resolution land cover data, population abundance data, regional climate data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0226 | Machine Learning algorithm for stream velocity prediction [2024 INV#WO0000000109319] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | time-of-travel web-based application that will allow users to estimate travel times in a spill response scenario with greater accuracy | more accurate predictions of travel times in a spill response scenario | travel time estimates during a spill response scenario | Developed in-house | FALSE | travel time estimates during a spill response scenario | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0225 | Automated otolith aging using image processing [2024 INV#WO0000000109315] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Fisheries managers and researchers often need to know the age of fish for population estimates, stock assessment, and similar projects. Fish otoliths (an ear bone) often accumulate rings annually (similar to trees). | automate this process to see if we can reduce variability across individual agers and automate the aging process of counting otolith rings, possibly saving time | automated aging process | 10/01/2023 | Developed in-house | FALSE | automated aging process | Otolith images (pictures) with known ages | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0224 | Machine learning for tsunami source zones [2024 INV#WO0000000109313] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | State of the art tsunami hazard analysis for coastal communities and infrastructure is computationally demanding. | computational efficiency | ML will be used to select the most representative source zones (among thousands of offshore earthquake ruptures) | 10/01/2024 | Developed in-house | FALSE | ML will be used to select the most representative source zones (among thousands of offshore earthquake ruptures) | Offshore fault slip rate data and historical seismicity | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0223 | Oceanographic, coastal, and geomorphic change analysis: data generation, QC/QA, and data management [2024 INV#WO0000000109310] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Machine learning to quantify coastal/marine change across broad scales. QC/QA processes in place to assess data robustness. | Verified data will be used by USGS projects for forecasting trends (i.e., shorelines, role of permafrost) in a variety of coastal/marine settings for US coasts. | quantified coastal/marine change across broad scales and verified data for forecasting | 10/01/2024 | Developed in-house | FALSE | quantified coastal/marine change across broad scales and verified data for forecasting | Satellite, aerial and fixed camera imagery. Oceanographic and coastal time series data. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0222 | Quantifying the effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains [2024 INV#WO0000000109305] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Quantifying the effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | Ability to quantify the effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | Quantified effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | 10/01/2018 | Developed in-house | FALSE | Quantified effects of land-use change and bioenergy crop production on pollinators, wildlife, and ecosystem services in the Northern Great Plains | UAS images | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0221 | Computationally efficient emulation of spheroidal elastic deformation sources using machine learning [2024 INV#WO0000000109302] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | analytical models are fast but can be inaccurate as they do not correctly satisfy boundary conditions for many geometries, while numerical models are slow and may require specialized expertise and software | we trained supervised machine learning emulators (model surrogates) based on parallel partial Gaussian processes which predict the output of a finite element numerical model with high fidelity | output of a finite element numerical model with high fidelity | 01/02/2023 | Developed in-house | FALSE | output of a finite element numerical model with high fidelity | model output | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0220 | Wildlife species recognition and distance from camera estimation [2024 INV#WO0000000109245] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | need reliable population estimates of animal density | ability to obtain reliable population estimates of animal density | population estimates of animal density | Developed in-house | FALSE | population estimates of animal density | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0219 | Machine Learning to evaluate water quality [2024 INV#WO0000000109241] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Examining the effect of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | Better understanding of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | Better understanding of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | Developed in-house | FALSE | Better understanding of physicochemical and meteorological variables on water quality indicators of harmful algal blooms in a shallow hypereutrophic lake | water-quality data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0218 | Ecological niche models for bat species [2024 INV#WO0000000109233] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | We are trying to understand what environmental factors determine the presence and absence of bat species across their range. | understand what environmental factors determine the presence and absence of bat species across their range | environmental factors that determine the presence and absence of bat species | 01/01/2022 | Developed in-house | FALSE | environmental factors that determine the presence and absence of bat species | bat presence locations, environmental raster data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0217 | Development of a Strategic Framework for Use and Implementation of Machine Learning in Energy Resource Program Workflows [2024 INV#WO0000000109216] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | development of a strategic framework for integrating Energy Resources Program science with traditional information technology-related platforms | position the ERP to more effectively deliver its unique data-driven information products | (1) adoption of ML pipelines/models in ERP project workflows; (2) modernization of key ERP data assets through API extension; and (3) technology transfer, targeted training, and multi-disciplinary career development for existing geospatial ERP workforce | FALSE | (1) adoption of ML pipelines/models in ERP project workflows; (2) modernization of key ERP data assets through API extension; and (3) technology transfer, targeted training, and multi-disciplinary career development for existing geospatial ERP workforce | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0216 | Quantifying Watershed Controls on Fine Sediment Flux to Lake Tahoe, California/Nevada [2024 INV#WO0000000109215] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | estimate watershed parameters of importance that drive sediment flux | Ability to better estimate watershed parameters of importance that drive sediment flux. | quantified watershed parameters | 10/01/2019 | Developed in-house | FALSE | quantified watershed parameters | Stage and turbidity from NWIS, water balance variables from Western Land Data Assimilation (NASA) land surface model | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0215 | Seismology of Magmatic Injection [2024 INV#WO0000000109214] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | understand the nature and dynamics of seismic sources associated with magmatic injection and magmatic transport | greater understanding of volcanic systems | understand the nature and dynamics of seismic sources associated with magmatic injection and magmatic transport | 10/01/2023 | Developed in-house | FALSE | understand the nature and dynamics of seismic sources associated with magmatic injection and magmatic transport | seismic data collected during the joint USGS/NSF Kilauea Imaging experiment, gravity data, and geodetic data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0214 | Earthquake Catalog Development [2024 INV#WO0000000109208] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | develop more complete and robust earthquake catalogs | volcanic earthquake catalog enhancement using integrated detection, matched-filtering, and relocation tools | more complete and robust earthquake catalogs | 10/01/2021 | Developed in-house | FALSE | more complete and robust earthquake catalogs | Seismic data collected by HVO during a nodal deployment across Pahala, HI | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0213 | Seedling Identification and Percent Growth Analysis [2024 INV#WO0000000109200] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | extraction of alphanumeric labels and analysis of seedling growth in petri dish images | saving time and reducing human error | alphanumeric labels from petri dish images | 10/01/2023 | Developed in-house | FALSE | alphanumeric labels from petri dish images | Numerous images of seedlings taken over a span of 5 days. Around 2000 images in total. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0212 | Gulf Coast Geologic Energy Machine Learning [2024 INV#WO0000000109198] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict expected ultimate recovery of shale oil wells | predict total organic carbon | ML model using elemental data to predict total organic carbon | 10/01/2023 | Developed in-house | FALSE | ML model using elemental data to predict total organic carbon | oil and gas well productivity and resource recovery data and recovery decline curves, geological and oil/gas basin data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0211 | Predicting Sparse (Geothermal) Resources Availability by using Machine Learning [2024 INV#WO0000000109195] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | developing new ML metrics for evaluating model performance that work with sparse natural resources, addressing the extreme mathematical sparsity of these resources at the regional scale, and engineering new evidence layers to inform modeling workflows | increasing the explainability, reproducibility, and accessibility of the assessment modeling process | new ML metrics for evaluating model performance | FALSE | new ML metrics for evaluating model performance | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0210 | Using machine learning to detect invasive bullfrogs [2024 INV#WO0000000109159] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Detecting bullfrogs along their invasion front in order to inform removal efforts | rapid detection | identification of invasive bullfrogs | 05/01/2020 | Developed in-house | FALSE | identification of invasive bullfrogs | audio recordings | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0209 | Deep Learning application for automated mapping of surficial landforms, surficial geological deposits, and abandoned mine sites from lidar-derived topography [2024 INV#WO0000000109153] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | mapping of surficial landforms, surficial geological deposits, and abandoned mine sites | automation of mapping | maps of surficial landforms, surficial geological deposits, and abandoned mine sites | 10/01/2024 | Developed in-house | FALSE | maps of surficial landforms, surficial geological deposits, and abandoned mine sites | lidar-derived topography | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0208 | Oil Spill Response for Ice-Covered Rivers [2024 INV#WO0000000109142] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The goal of this DOI Inland Oil Spill Preparedness Program (IOSPP)-funded work is to provide rapid, near real-time information to oil spill response crews concerning the safety of ice-covered areas | provide rapid, near real-time information to oil spill response crews concerning the safety of ice-covered areas | near real-time information to oil spill response crews concerning the safety of ice-covered areas | Developed in-house | FALSE | near real-time information to oil spill response crews concerning the safety of ice-covered areas | unknown | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0207 | Pacific Northwest Stream Flow Permanence [2024 INV#WO0000000109137] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | streamflow classification of perennial versus non-perennial, which is the charge of many land steward agencies | inform management decisions that require streamflow classification of perennial versus non-perennial | models used to inform management decisions that require streamflow classification of perennial versus non-perennial | 10/01/2023 | Developed in-house | FALSE | models used to inform management decisions that require streamflow classification of perennial versus non-perennial | unknown | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0206 | SAMPLE Toolbox [2024 INV#WO0000000109123] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | monitoring vegetation | ability for land managers to develop plans for monitoring vegetation | A toolbox for land managers to develop plans for monitoring vegetation | Developed in-house | FALSE | A toolbox for land managers to develop plans for monitoring vegetation | FALSE | FALSE | |||||||||||||||||
| Department Of The Interior | USGS | DOI-0205 | Mapping wildfire fuels in previously burned landscapes [2024 INV#WO0000000109121] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | understand how land management treatments affect the probability of reburning | understand how land management treatments affect the probability of reburning | probability of reburning | FALSE | probability of reburning | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0204 | Lava lake thermal pattern classification using self organizing maps and relationships to eruption processes at Kilauea Volcano, Hawaii [2024 INV#WO0000000109098] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | classify lava lake thermal patterns | ability to classify lava lake thermal patterns from thermal infrared time-lapse imagery | classified lava lake thermal patterns | 10/01/2018 | Developed in-house | FALSE | classified lava lake thermal patterns | infrared time-lapse imagery | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0203 | Advancing image-based surveys to support sea duck conservation along the Pacific Flyway [2024 INV#WO0000000109096] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Safety, expense, observer bias and lack of methodological consistency are rising concerns associated with observer-based surveys, making it imperative to transition to more sustainable methods. | Digital aerial surveys (DAS) that automate counts from aerial imagery using convolutional neural network (CNN) models are one way to improve survey safety and count accuracy. | Standardized DAS for the lower Pacific Flyway to help maximize safety, while improving data consistency and model accuracy among important regions within the Flyway. | FALSE | Standardized DAS for the lower Pacific Flyway to help maximize safety, while improving data consistency and model accuracy among important regions within the Flyway. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0202 | InSAR and other geodetic studies at Volcanoes [2024 INV#WO0000000109093] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | recognize transient signals in combined InSAR and GPS data that may be indications of impending hazardous volcanic activity | help predict hazardous volcanic activity | identification of hazardous volcanic activity | 01/01/2024 | Developed in-house | FALSE | identification of hazardous volcanic activity | ImageNet database, Sentinel 1 InSAR data, GNSS data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0201 | Climate Futures for Lizards and Snakes in Western North America [2024 INV#WO0000000109092] | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Science | Deployed | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Identifying new management challenges to reptiles based on shifting environmental conditions | ability to identify new management challenges to reptiles based on shifting environmental conditions | management challenges to reptiles based on shifting environmental conditions | Developed in-house | FALSE | management challenges to reptiles based on shifting environmental conditions | point based occurrence, raster elevation data, modeled climate data | FALSE | None of the Above | FALSE | |||||||||||||||
| Department Of The Interior | USGS | DOI-0200 | Predicting inundation dynamics of small forested wetlands [2024 INV#WO0000000109089] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | better understand the wetting/drying dynamics of small wetlands relevant to amphibians | help land managers in the Upper Midwest understand the wetting/drying dynamics of small wetlands relevant to amphibians | wetting/drying dynamics of small wetlands relevant to amphibians | FALSE | wetting/drying dynamics of small wetlands relevant to amphibians | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0199 | Machine-learning model to delineate sub-surface agricultural drainage from satellite imagery [2024 INV#WO0000000109078] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | delineate sub-surface agricultural drainage | ability to delineate sub-surface agricultural drainage from satellite imagery | classification of sub-surface agricultural drainage | 05/11/2023 | Developed in-house | FALSE | classification of sub-surface agricultural drainage | Satellite imagery included acquisition dates from 2008 to 2020. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0198 | Environmental streamflows in the United States: historical patterns and predictions [2024 INV#WO0000000109075] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | It is important that environmental streamflow assessments by water managers consider changes in climate, land use, and water management; this cannot be done effectively without understanding historical variability and changes in environmental streamflows | Estimates of environmental streamflows for ungaged streams | estimates of environmental streamflows for thousands of ungaged stream reaches | FALSE | estimates of environmental streamflows for thousands of ungaged stream reaches | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0197 | Extracting robust, searchable data from narrative geologic descriptions [2024 INV#WO0000000109022] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Generative AI | Extracting robust, searchable data from narrative geologic descriptions | time savings | searchable geologic description data | 10/01/2024 | Developed in-house | FALSE | searchable geologic description data | Descriptions of geologic units taken from published reports. | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0196 | Classifying GPS data to understand flight behavior of birds [2024 INV#WO0000000109015] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | understand under what circumstances eagles are more likely to collide with wind turbines | better understand circumstances where eagles are more likely to collide with wind turbines | classification of the flight behavior of birds | 11/01/2019 | Developed in-house | FALSE | classification of the flight behavior of birds | animal tracking data - GPS telemetry | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0195 | Whole-lake indexing of round goby abundances with photographic catch data [2024 INV#WO0000000109010] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | quantify abundances of one of the most abundant prey fishes in the Great Lakes, an invasive species called Round Goby | create a more effective method of monitoring abundances of prey fish across the entirety of the Great Lakes | quantified round goby abundances | 02/01/2019 | Developed in-house | FALSE | quantified round goby abundances | Image and position data from autonomous underwater vehicles; LiDAR bathymetry data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0194 | Predicting PFAS in shallow soils in northern New England [2024 INV#WO0000000108973] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict PFAS in soils across Maine, New Hampshire, and Vermont | more accurate prediction of PFAS in soils across Maine, New Hampshire, and Vermont | predictions of PFAS in soils across Maine, New Hampshire, and Vermont | 10/01/2023 | Developed in-house | FALSE | predictions of PFAS in soils across Maine, New Hampshire, and Vermont | Shallow soil PFAS data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0193 | Improving accuracy and precision of sonar-based estimates of fish abundance [2024 INV#WO0000000108800] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Sonar-based estimates of fish abundance are prone to inaccuracies that can limit their utility | improved accuracy and precision of USGS's annual prey fish abundance estimates | annual prey fish abundance estimates | 01/01/2023 | Developed in-house | FALSE | annual prey fish abundance estimates | Sonar transect data collected by conventional vessels and uncrewed surface vehicles | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0192 | Machine learning-based landscape feature classification using satellite and airborne imagery [2024 INV#WO0000000108791; 2024 INV#WO0000000108794] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | need to increase the accuracy of habitat and land cover classifications | enhanced accuracy of habitat and land cover classifications | habitat and land cover classifications | 08/01/2013 | Purchased from a vendor | ESRI | FALSE | habitat and land cover classifications | Airborne and Satellite imagery - often with required field-based training data | FALSE | None of the Above | FALSE | |||||||||||||
| Department Of The Interior | USGS | DOI-0191 | Predicting PFAS occurrence in groundwater using machine learning [2024 INV#WO0000000108780] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict PFAS occurrence in groundwater at the depths of drinking water supplies across the conterminous U.S. | better understand the occurrence of PFAS in groundwater | predictions of PFAS occurrence in groundwater at the depths of drinking water supplies across the conterminous U.S. | 10/01/2023 | Developed in-house | FALSE | predictions of PFAS occurrence in groundwater at the depths of drinking water supplies across the conterminous U.S. | Groundwater well data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0190 | Machine Learning Image Classification of Wetlands and Soil moisture [2024 INV#WO0000000108779] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | inform land managers, planners, and researchers about historical and current changes to human and natural environments, focused on floods, droughts, and fires | ability to classify wetlands and soil moisture at large scales | quantification of causal processes behind wildfire | 10/01/2023 | Developed with both contracting and in-house resources | FALSE | quantification of causal processes behind wildfire | Training Samples, Raster Images | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0189 | Zero shot segmentation to expedite Quaternary geologic mapping [2024 INV#WO0000000108739; 2024 INV#WO0000000109150] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | The construction of detailed geologic maps requires a lot of manual GIS data input to outline the extent of interpreted geologic features. | expedite the process of creating GIS data for geologic maps | GIS data for geologic maps | FALSE | GIS data for geologic maps | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0188 | Inventorying landforms with convolutional neural networks [2024 INV#WO0000000108738; WO0000000109117] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | efficiently identify and inventory landform features from lidar-derived topographic images | efficient identification of landform features | inventory of landform features | 02/01/2024 | Developed in-house | FALSE | inventory of landform features | Digital elevation models | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0187 | Tracking wetlands and water movement across watersheds [2024 INV#WO0000000108734] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | Accurate prediction of flood and drought impacts requires understanding upstream surface water storage dynamics and storage capacity | classify satellite imagery into open and vegetated water extent, use deep learning algorithms to relate daily river discharge to meteorology and surface water storage dynamics | upstream surface water storage dynamics and storage capacity | FALSE | upstream surface water storage dynamics and storage capacity | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0186 | Everglades-Flux, Digital Surveys [2024 INV#WO0000000108630] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | automatically process Normalized Difference Vegetation Index images and come up with a true value of live vegetation and fill in missing data | automatically process Normalized Difference Vegetation Index images | true value of live vegetation | FALSE | true value of live vegetation | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0185 | Shoreline modeling [2024 INV#WO0000000108297; 2024 INV#WO0000000109312] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | predict shoreline evolution and compare their accuracy to traditional physics-based models | increased accuracy of shoreline evolution | predict shoreline evolution | 10/01/2023 | Developed in-house | FALSE | predict shoreline evolution | shoreline time series data, satellite imagery, oceanographic time series data | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | FWS | DOI-0184 | Summarization of documents and output to ECOSphere species workflow [2024 INV#DOI-63] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Other | The ECOSphere species workflow relies on extracting relevant ecological and biological insights from a vast and continuously growing repository of unstructured documents, currently numbering in the millions. Manual review and summarization of these documents are infeasible due to scale, time constraints, and resource limitations. There is a critical need for an AI-driven solution that can automatically ingest, analyze, and summarize large volumes of scientific and technical documents, and seamlessly output structured summaries into the ECOSphere workflow. This will enhance data accessibility, accelerate species-related research, and support timely decision-making in environmental and conservation efforts. | Implementing AI-powered document summarization for the ECOSphere species workflow will significantly enhance operational efficiency by automating the extraction of key insights from millions of unstructured documents. This will reduce manual workload, accelerate species-related research, and support timely decision-making. | Structured Summaries of Documents: concise, machine-readable summaries of scientific, regulatory, and technical documents, with key metadata extraction (e.g., species name, habitat, threats, geographic location, publication date). Relevance Scoring: AI-generated confidence scores indicating the relevance of each document to specific species or ecological topics. Taxonomic and Thematic Tagging: automatic tagging of documents with species names, ecological terms, and conservation themes to support search and filtering. Workflow-Ready Data Packages: summarized content formatted for direct ingestion into ECOSphere workflows (e.g., JSON, XML, or database-ready formats). Audit Trail and Traceability: links to original documents and AI-generated summaries for transparency and validation. Integration Logs and Metrics: reports on the number of documents processed, summary accuracy, and integration success rates. | FALSE | Structured Summaries of Documents: concise, machine-readable summaries of scientific, regulatory, and technical documents, with key metadata extraction (e.g., species name, habitat, threats, geographic location, publication date). Relevance Scoring: AI-generated confidence scores indicating the relevance of each document to specific species or ecological topics. Taxonomic and Thematic Tagging: automatic tagging of documents with species names, ecological terms, and conservation themes to support search and filtering. Workflow-Ready Data Packages: summarized content formatted for direct ingestion into ECOSphere workflows (e.g., JSON, XML, or database-ready formats). Audit Trail and Traceability: links to original documents and AI-generated summaries for transparency and validation. Integration Logs and Metrics: reports on the number of documents processed, summary accuracy, and integration success rates. | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0183 | Cell Phone Application for Oil Spill Detection [2024 INV#WO0000000108285] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | develop a model that can be used to interpret cell phone images to predict oil in environmental samples | The tool can be rapidly deployed for use in the field by the oil spill responder community. | prediction of oil in samples | FALSE | prediction of oil in samples | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | USGS | DOI-0182 | Wave runup and total water level observations from time series imagery at several sites with varying nearshore morphologies [2024 INV#WO0000000108262] | Pilot The use case has been deployed in a limited test or pilot capacity. | Science | Pilot | c) Not high-impact | Not high-impact | Computer Vision | separation (segmentation) of land and water in images | ability to compare actuals to forecasted water levels | calculated water levels | 10/01/2024 | Developed in-house | FALSE | calculated water levels | imagery | FALSE | None of the Above | FALSE | ||||||||||||||
| Department Of The Interior | USGS | DOI-0181 | National Wildlife Disease Database (NWDD) [2024 INV#WO0000000108149; 2024 INV#WO0000000109192] | Pre-deployment The use case is in a development or acquisition status. | Science | Pre-deployment | c) Not high-impact | Not high-impact | Classical/Predictive Machine Learning | bring together various wildlife health data streams across informational domains (i.e., laboratory results, environmental observations, news media, etc.) | visualize and contextualize information from one or more sources | advanced analytics to natural resource authorities | FALSE | advanced analytics to natural resource authorities | FALSE | FALSE | ||||||||||||||||||
| Department Of The Interior | OS | DOI-0180 | Office of Grants Management (PGM) Grants Utility Tool | Deployed The use case is being actively authorized or utilized to support the functions or mission of an agency. | Procurement & Financial Management | Deployed | c) Not high-impact | Not high-impact | Agentic AI | PGM faced growing operational and compliance challenges across the entire financial assistance lifecycle. Manual processes (project description reviews, pre-award SAM.gov validations, and detailed budget analyses) were extremely labor-intensive, inconsistent across bureaus, and vulnerable to human error. Staff were required to review thousands of records each year, including over 8,000 project descriptions, more than 13,000 pre-award validation actions, and more than 8,000 detailed budget submissions. Each task required extensive reading, cross-checking across multiple systems, and detailed documentation. These demands strained a shrinking grants workforce, delayed internal control reviews, increased the risk of compliance failures under 2 CFR 200, and diverted staff from higher-value oversight activities. The Department needed a standardized, accurate, and scalable way to conduct internal controls testing, ensure timely eligibility checks, and complete budget reviews without overwhelming staff resources or jeopardizing compliance. | Automated analysis increased objectivity, removed inconsistencies in how staff interpreted regulatory requirements, and provided faster, more reliable information to support program decisions. | The combined AI tools automatically generate standardized compliance outputs across project descriptions, budget reviews, and entity validations, replacing thousands of hours of manual analysis. They produce automated scoring, flags for risks or inconsistencies, cross-walks between budget documents, and complete audit-ready records aligned with internal control requirements. Together, these outputs streamline oversight, strengthen regulatory compliance, and create a consistent, defensible documentation trail for more than 29,000 annual financial assistance actions. | 04/10/2024 | Developed in-house | TRUE | The combined AI tools automatically generate standardized compliance outputs across project descriptions, budget reviews, and entity validations, replacing thousands of hours of manual analysis. They produce automated scoring, flags for risks or inconsistencies, cross-walks between budget documents, and complete audit-ready records aligned with internal control requirements. Together, these outputs streamline oversight, strengthen regulatory compliance, and create a consistent, defensible documentation trail for more than 29,000 annual financial assistance actions. | Various public sources | sam.gov | FALSE | None of the Above |