In the rapidly evolving landscape of scientific research and regulatory oversight, advanced technological solutions play a pivotal role in enhancing efficiency, accuracy, and insight. This collection of use cases highlights the diverse applications of artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) in various sectors, including food safety, drug regulation, and scientific research. From the development of predictive models for antimicrobial resistance to the creation of intelligent knowledge discovery platforms, these initiatives represent a significant leap towards more effective and informed decision-making. By integrating cutting-edge technologies, these projects aim to address complex challenges, streamline processes, and improve outcomes across multiple domains.
The Information Gateway hotline, powered by OneReach AI, connects callers to a phone IVR system that provides access to state hotlines for reporting child abuse and neglect, tailored to the caller’s area code. In addition to this service, OneReach offers a FAQ texting service that employs natural language processing to address user inquiries. The data collected from user queries is utilized for reinforcement training by human AI trainers, contributing to the continuous development of additional FAQs and improving the overall effectiveness of the service.
The AHRQ Search initiative aims to enhance the organization’s search capabilities by incorporating features such as relevancy tailoring, auto-generated synonyms, and automated suggestions. This comprehensive search tool also provides suggested related content and auto-tagging functionalities, along with a “Did you mean” feature to assist users in finding specific information more efficiently. By improving the search experience, the project seeks to facilitate better access to relevant content for users across the organization.
This project involves the development of a chatbot that serves as an interactive interface for users to ask questions about AHRQ content. By enabling conversational inquiries, the chatbot aims to replace the traditional public inquiry telephone line, making it easier and more efficient for users to access information and support.
The ReDIRECT project leverages artificial intelligence to discover candidates for drug repurposing. This initiative focuses on evaluating existing pharmaceuticals to find new therapeutic applications, thereby potentially speeding up the availability of effective treatments for various health conditions.
This project employs AI-based algorithms integrated with the Accuro XV system to detect and highlight fractures and soft tissue injuries. By enhancing diagnostic capabilities, this technology aims to improve the accuracy and speed of injury assessments, ultimately benefiting patient care in emergency and clinical settings.
This initiative utilizes AI-based algorithms within the Lumify handheld ultrasound system to identify lung injuries and infectious diseases. By providing rapid and accurate diagnostics, this technology aims to enhance clinical decision-making and improve patient outcomes in critical care situations.
This project aims to accurately determine the depth, severity, and size of burn injuries using advanced imaging techniques. By providing precise assessments, this initiative seeks to enhance treatment planning and improve outcomes for patients suffering from burn injuries.
This project involves a continuous monitoring platform equipped with AI algorithms designed to assess the severity of COVID-19 in patients. By providing real-time data and insights, this initiative aims to improve patient management and facilitate timely interventions in healthcare settings.
The Digital MCM: Visual Dx project utilizes smartphone imaging technology combined with artificial intelligence to detect the presence of mpox. This innovative approach aims to enhance accessibility and speed in diagnosing mpox, allowing for timely interventions and better public health responses.
The Host-Based Diagnostics: Patchd initiative involves a wearable device integrated with an AI model designed to predict the onset of sepsis in patients at home. By enabling early detection of this critical condition, the project aims to improve patient outcomes and facilitate timely medical interventions, potentially saving lives.
The Data Modernization project focuses on creating an open data management architecture that enhances business intelligence (BI) and machine learning (ML) capabilities across all ASPR data. This initiative aims to streamline data access and analysis, ultimately improving decision-making processes and operational efficiency within health services.
This project employs artificial intelligence and machine learning tools to process vast amounts of threat data, enhancing the ability to detect and respond to cyber threats. By leveraging advanced analytics, the initiative aims to improve cybersecurity measures and protect sensitive health information from potential breaches.
The emPOWER initiative harnesses artificial intelligence to quickly develop tools and programs aimed at identifying and supporting populations at risk during the COVID-19 pandemic. By focusing on at-risk groups, this project seeks to enhance public health responses and ensure that vulnerable communities receive the necessary resources and support.
The Community Access to Testing project employs multiple machine learning models to predict surges in COVID-19 cases within communities. By accurately forecasting these trends, the initiative aims to improve testing accessibility and resource allocation, ultimately enhancing public health preparedness and response efforts.
This project focuses on developing modeling tools and conducting analyses to prepare for biothreat events. By refining these models during emergent situations, the initiative aims to enhance situational awareness and improve response strategies, ensuring that health services are better equipped to handle potential threats.
The Ventilator Medication Model utilizes a generalized additive model to project the rate at which COVID-19 patients will require ventilation. By providing accurate forecasts, this project aims to assist healthcare providers in resource planning and management, ensuring that adequate ventilatory support is available for patients in need.
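A minimal sketch of the kind of generalized additive model described above, assuming the pygam library and hypothetical inputs (week index and reported case counts); the project’s actual covariates and implementation are not shown here.

```python
# Sketch of a Poisson GAM projecting ventilated COVID-19 patient counts.
# The pygam library choice and the feature set are assumptions for illustration.
import numpy as np
from pygam import PoissonGAM, s

rng = np.random.default_rng(0)
weeks = np.arange(104)                                   # two years of weekly data
cases = 5000 + 3000 * np.sin(weeks / 8.0) + rng.normal(0, 300, size=weeks.size)
X = np.column_stack([weeks, cases])
y = rng.poisson(lam=np.clip(cases * 0.02, 1, None))      # synthetic ventilated counts

# Smooth terms over calendar time and reported case volume
gam = PoissonGAM(s(0) + s(1)).fit(X, y)

# Project the ventilation burden four weeks ahead, holding case counts flat
future = np.column_stack([np.arange(104, 108), np.full(4, cases[-1])])
print(gam.predict(future))
```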
This initiative employs artificial intelligence and modeling techniques to optimize the redistribution of medical products among partners, including jurisdictions, pharmacies, and federal entities. By considering factors such as distance, ordering patterns, and equity, the project aims to enhance the efficiency of product distribution, ensuring that resources are allocated effectively where they are most needed.
This project focuses on optimizing the movement of highly infectious patients using a limited number of transport containers. By analyzing factors such as distance and population density, the initiative aims to enhance planning and decision-making processes, ensuring that patient transport is conducted safely and efficiently during health emergencies.
The TowerScout project utilizes aerial imagery combined with advanced object detection and image classification models to identify cooling towers. These structures are known potential sources of Legionnaires’ Disease outbreaks in communities. By automating the detection process, TowerScout aims to enhance public health investigations and facilitate timely interventions during outbreaks.
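To illustrate the object-detection step, the following sketch runs a torchvision detector over an aerial image tile and keeps confident detections; the model architecture, checkpoint path, and file names are assumptions, not TowerScout’s actual pipeline.

```python
# Sketch of scoring an aerial tile with a fine-tuned detector to flag candidate
# cooling towers. The checkpoint "cooling_tower_detector.pt" is hypothetical.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two classes: background + cooling tower; in practice the model would be
# fine-tuned on labeled aerial imagery before use.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("cooling_tower_detector.pt"))
model.eval()

tile = to_tensor(Image.open("aerial_tile.png").convert("RGB"))
with torch.no_grad():
    detections = model([tile])[0]

# Keep confident detections for review by a public health investigator
keep = detections["scores"] > 0.5
for box, score in zip(detections["boxes"][keep], detections["scores"][keep]):
    print([round(v, 1) for v in box.tolist()], float(score))
```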
The HaMLET initiative employs computer vision models to analyze chest x-rays for the detection of tuberculosis (TB). This technology aims to enhance the quality of health screenings conducted overseas for immigrants and refugees seeking entry into the United States, thereby improving early detection and treatment of TB in vulnerable populations.
This project applies zero-shot learning techniques to identify and classify reports of menstrual irregularities that have been associated with COVID-19 vaccinations. By leveraging this advanced machine learning approach, the initiative aims to enhance the understanding of vaccine side effects and improve public health monitoring related to vaccination outcomes.
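A minimal sketch of zero-shot text classification applied to a free-text safety report, assuming a Hugging Face NLI model and an illustrative label set; the project’s actual model and labels may differ.

```python
# Sketch of zero-shot classification of a vaccine safety narrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

report = ("Patient reports her cycle was three weeks late and unusually heavy "
          "following the second dose.")
labels = ["menstrual irregularity", "injection site reaction", "unrelated complaint"]

result = classifier(report, candidate_labels=labels)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```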
This validation study focuses on the application of deep learning algorithms to detect diabetic retinopathy in retinal photos collected through the National Health and Nutrition Examination Survey (NHANES). The goal is to assess whether these AI algorithms can effectively replace traditional ophthalmologist grading, potentially streamlining public health surveillance of eye diseases and improving early detection efforts.
This project involves a team of scientists developing a computer vision model to automate the extraction of sidewalk networks from street-level images sourced from Mapillary. By accurately identifying the presence of sidewalks, this initiative aims to support urban planning and public health efforts related to physical activity and mobility in communities.
This initiative focuses on developing machine learning techniques to analyze GPS-based data from smartphone applications to identify walking and bicycling trips. By utilizing commercially available location data, the project aims to produce geocoded data tables, GIS layers, and maps that can inform public health strategies and promote active transportation in communities.
This project aims to develop machine learning techniques to identify infrastructure that supports physical activity, such as sidewalks and bicycle lanes, in both satellite and roadway images. By analyzing image-based data, the initiative seeks to generate geocoded data tables, maps, and summary reports that can aid in urban planning and public health initiatives promoting active lifestyles.
This initiative focuses on using natural language processing and machine learning techniques to analyze state and local policy provisions that either promote or inhibit the creation of healthy built environments. By processing various types of policy texts, the project aims to produce datasets that quantify relevant aspects of these policies. As of April 2023, the Division of Nutrition, Physical Activity, and Obesity (DNAPO) is collaborating with contractors to explore the effectiveness of these methods compared to traditional approaches, while also identifying related efforts within the CDC and academic institutions.
This project involves the development of a Natural Language Processing (NLP) tool designed for topic modeling to automate the review of public comments submitted in response to notices of proposed rulemaking. By clustering these comments based on their content, the tool aims to enhance the efficiency of the review process, allowing for more effective analysis and incorporation of public feedback into regulatory decision-making.
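A minimal sketch of topic modeling over public comments with latent Dirichlet allocation, assuming scikit-learn and a toy comment set; the tool’s actual modeling choices are not reproduced here.

```python
# Sketch of clustering public comments by topic for batched review.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "The proposed reporting burden on small clinics is too high.",
    "Please clarify the data submission deadlines for rural providers.",
    "We support stronger privacy protections for patient records.",
    "Privacy safeguards in section 4 should be expanded.",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")

# Assign each comment to its most probable topic for clustered review
print(lda.transform(dtm).argmax(axis=1))
```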
The Sequential Coverage Algorithm (SCA) and partial Expectation-Maximization (EM) estimation are advanced machine learning techniques implemented by the CDC’s National Center for Health Statistics (NCHS) to enhance data linkage processes. The SCA, a supervised algorithm, helps create effective joining methods for large datasets, while unsupervised EM estimation determines the proportion of matching pairs within these groups. Together, these methods significantly improve the accuracy and efficiency of linking health data, facilitating better public health analysis and decision-making.
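The EM step can be sketched as a two-class mixture over binary field-agreement patterns, assuming conditional independence between fields; the field names and data below are illustrative, not NCHS’s actual linkage setup.

```python
# Sketch of EM estimation of the match proportion among candidate record pairs.
import numpy as np

# Each row is a candidate pair; columns are 1 if the field (e.g., name, DOB, ZIP) agrees.
patterns = np.array([
    [1, 1, 1], [1, 1, 0], [0, 0, 1], [1, 0, 0],
    [1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0],
])

p = 0.5                       # prior probability a pair is a true match
m = np.full(3, 0.9)           # P(field agrees | match)
u = np.full(3, 0.1)           # P(field agrees | non-match)

for _ in range(50):
    # E-step: posterior probability that each pair is a match
    like_m = np.prod(m**patterns * (1 - m)**(1 - patterns), axis=1)
    like_u = np.prod(u**patterns * (1 - u)**(1 - patterns), axis=1)
    w = p * like_m / (p * like_m + (1 - p) * like_u)

    # M-step: update the match proportion and per-field agreement rates
    p = w.mean()
    m = (w[:, None] * patterns).sum(axis=0) / w.sum()
    u = ((1 - w)[:, None] * patterns).sum(axis=0) / (1 - w).sum()

print(f"estimated match proportion: {p:.2f}")
```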
This project involves the use of MedCoder to assign ICD-10 codes to the cause of death information recorded on death certificates. By translating the literal text descriptions provided by certifiers into standardized codes, this initiative ensures accurate classification of both underlying and contributing causes of death, which is essential for public health reporting and analysis.
This initiative focuses on analyzing clinical notes to identify instances of illicit use and misuse of stimulant and opioid medications. By leveraging advanced data analysis techniques, the project aims to enhance monitoring and intervention strategies for substance misuse, ultimately contributing to improved public health outcomes.
The AI/ML Model Release Standards project at NCHS aims to establish comprehensive guidelines for the release of artificial intelligence and machine learning models used within the Center. These standards are intended to ensure consistency and quality across AI/ML projects and may serve as a foundational framework for developing broader standards throughout the CDC, promoting best practices in AI/ML development and deployment.
This initiative involves the development of a Named Entity Recognition (NER) model using natural language processing (NLP) techniques to analyze electronic health records from the National Hospital Care Survey. The model aims to accurately detect assertions or negations of opioid use within clinical notes, thereby enhancing the understanding of opioid prescribing patterns and misuse in healthcare settings.
The Nowcasting Suicide Trends project focuses on creating an interactive dashboard that integrates various traditional and non-traditional datasets to provide real-time insights into national suicide death trends. By employing a multi-stage machine learning pipeline, this initiative aims to deliver timely and actionable data that can inform public health strategies and interventions aimed at reducing suicide rates.
The Feedback Analysis Solution (FAS) is designed to enhance the review of public comments and other relevant information from stakeholders by utilizing data from CMS and publicly available sources like Regulations.Gov. By employing Natural Language Processing (NLP) tools, FAS efficiently aggregates, sorts, and identifies duplicate comments, streamlining the review process. Additionally, machine learning (ML) techniques are applied to extract key topics, themes, and sentiment from the dataset, providing valuable insights for decision-making.
The Predictive Intelligence (PI) system is implemented within the Quality Service Center (QSC) to optimize incident assignment. By analyzing short descriptions provided by users through the ServiceNow Service Portal, the system identifies keywords that match previously submitted incidents, allowing for efficient routing of tickets to the appropriate assignment groups. This solution is regularly updated and re-trained with incident data every 3-6 months to ensure its effectiveness and adaptability to changing needs.
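For illustration, keyword-based incident routing can be approximated with a text classifier over short descriptions, as in the sketch below; the tickets and group names are hypothetical, and ServiceNow Predictive Intelligence is configured within the platform rather than coded this way.

```python
# Sketch of routing incidents to assignment groups from their short descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

descriptions = [
    "cannot reset my EUA password",
    "portal login page times out",
    "report export to excel is failing",
    "dashboard numbers look wrong after refresh",
]
groups = ["Access Management", "Access Management", "Reporting", "Reporting"]

router = make_pipeline(TfidfVectorizer(), LinearSVC())
router.fit(descriptions, groups)

print(router.predict(["locked out of the service portal"]))  # -> Access Management
```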
The Fraud Prevention System Alert Summary Report Priority Score model is being developed to analyze Medicare administrative and claims data, along with fraud alert and investigation information. Its primary goal is to predict the likelihood that an investigation will result in an administrative action, thereby assisting CMS in prioritizing its investigative resources effectively. As the model is still under development, the final specifications and methodologies are yet to be finalized.
The Center for Program Integrity (CPI) has developed several fraud prevention models, such as DMEMBITheftML and HHAProviderML, which utilize Medicare administrative and claims data to detect potential cases of fraud, waste, and abuse. By employing random forest techniques, these models generate alerts for investigators, highlighting potential fraud schemes and the providers involved, thereby enhancing the effectiveness of fraud detection efforts.
The Priority Score Model is designed to rank healthcare providers within the Fraud Prevention System (FPS) based on program integrity guidelines. By utilizing inputs such as Medicare claims data, Targeted Probe and Educate (TPE) data, and jurisdiction information, the model applies logistic regression techniques to generate rankings that help identify providers who may require further scrutiny or intervention.
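A minimal sketch of the ranking idea, assuming hypothetical claims-derived features and a historical outcome label; it is not the model’s actual specification.

```python
# Sketch of ranking providers by a logistic-regression priority score.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.DataFrame({
    "claims_per_month": [120, 40, 300, 55, 410, 90],
    "avg_paid_amount":  [900, 200, 2500, 350, 3100, 700],
    "tpe_flag":         [1, 0, 1, 0, 1, 0],
    "led_to_action":    [1, 0, 1, 0, 1, 0],   # historical outcome label
})

model = LogisticRegression(max_iter=1000)
model.fit(train.drop(columns="led_to_action"), train["led_to_action"])

# Score providers and list them in priority order (highest risk first)
features = train.drop(columns="led_to_action")
scores = model.predict_proba(features)[:, 1]
print(np.argsort(-scores))
```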
The Priority Score Timeliness project focuses on forecasting the time required to address alerts generated by the Fraud Prevention System (FPS). By analyzing inputs such as Medicare claims data, TPE data, and jurisdiction information, the project employs various machine learning techniques, including Random Forest, Decision Tree, Gradient Boosting, and Generalized Linear Regression, to provide accurate time estimates for alert resolution, thereby improving resource allocation and efficiency in fraud investigations.
The CCIIO Enrollment Resolution and Reconciliation System (CERRS) utilizes artificial intelligence for classification purposes. This system aims to streamline the enrollment resolution process by effectively categorizing data, thereby enhancing the efficiency and accuracy of enrollment management within the Center for Consumer Information and Insurance Oversight (CCIIO).
The CMS Connect (CCN) project leverages artificial intelligence to enhance global search capabilities within the CMS framework. By improving search functionalities, this initiative aims to facilitate easier access to information and resources across the CMS network, ultimately supporting better decision-making and operational efficiency.
The CMS Enterprise Portal Services project focuses on developing an AI-powered chatbot aimed at enhancing process efficiency within the CMS Enterprise Portal. This chatbot is designed to provide quick and accurate responses to user inquiries, streamlining workflows and improving knowledge management. By facilitating easier access to critical information and resources, the chatbot enhances the overall user experience for staff and stakeholders, ultimately supporting better decision-making and operational effectiveness within the organization.
The Federally Facilitated Marketplaces (FFM) project utilizes artificial intelligence to enhance anomaly detection, correction, classification, and forecasting within the marketplace data. By applying advanced algorithms to time series data, this initiative aims to identify irregular patterns and trends, enabling more accurate predictions and timely interventions to improve marketplace operations and decision-making.
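A minimal sketch of one common approach to time-series anomaly detection, a rolling median with a MAD-based robust z-score; the enrollment series is synthetic and the FFM system’s actual algorithms are not described here.

```python
# Sketch of flagging anomalies in a daily marketplace metric.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2024-01-01", periods=120, freq="D")
enrollments = pd.Series(1000 + rng.normal(0, 25, 120), index=dates)
enrollments.iloc[60] = 1400     # inject an anomalous spike

rolling_median = enrollments.rolling(14, center=True).median()
mad = (enrollments - rolling_median).abs().rolling(14, center=True).median()
robust_z = (enrollments - rolling_median) / (1.4826 * mad)

anomalies = enrollments[robust_z.abs() > 4]
print(anomalies)
```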
The Marketplace Learning Management System (MLMS) project employs artificial intelligence to facilitate language interpretation and translation services. This initiative aims to improve accessibility and understanding of marketplace information for diverse populations, ensuring that language barriers do not hinder individuals from accessing essential resources and support.
The Medicaid and CHIP Financial (MACFin) team has developed a machine learning model specifically designed to detect anomalies within Disproportionate Share Hospital (DSH) audit data. This model identifies the top 1-5% of outliers based on extreme behaviors in the data, such as unusual amounts or characteristics, facilitating targeted investigations into potential gaps and barriers. By flagging these anomalies, the model helps minimize overpayments and underpayments, ensuring more accurate financial distributions and supporting effective auditing processes.
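For illustration, tabular outlier detection of this kind can be sketched with an isolation forest; the column names, synthetic figures, and contamination rate below are assumptions, not the MACFin model’s actual design.

```python
# Sketch of isolating the top few percent of outliers in audit-style data.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
audits = pd.DataFrame({
    "reported_dsh_payment": rng.normal(2_000_000, 400_000, 500),
    "uncompensated_care":   rng.normal(2_500_000, 500_000, 500),
})
audits.loc[0, "reported_dsh_payment"] = 9_000_000   # inject an extreme record

# contamination ~ expected share of outliers (the project targets roughly 1-5%)
model = IsolationForest(contamination=0.02, random_state=0).fit(audits)
audits["outlier"] = model.predict(audits) == -1

print(audits[audits["outlier"]].head())
```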
The MACFin team has developed a forecasting model to predict future Disproportionate Share Hospital (DSH) payments for the upcoming year, utilizing historical data and trends from the past 1-3 years. By training multiple models, including time series and machine learning approaches, the team identified the most effective model based on average mean error in predicting DSH payment amounts across hospitals. Given the disorganized nature of DSH data, significant effort was invested in cleaning and consolidating over six years of data from all states. This predictive capability not only aids in early planning and trend analysis but can also be adapted to forecast other DSH-related metrics, such as payment-to-uncompensated ratios and instances of underpayment or overpayment.
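A minimal sketch of one candidate time-series approach (exponential smoothing with an additive trend) on a synthetic annual payment history; the MACFin team’s selected model and its error metric are not reproduced here.

```python
# Sketch of forecasting next year's payment amount from a short annual history.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Ten years of annual payments in $B (synthetic values)
history = pd.Series([15.1, 15.8, 16.2, 17.0, 17.4, 18.2, 18.9, 19.7, 20.1, 21.0])

model = ExponentialSmoothing(history, trend="add",
                             initialization_method="estimated").fit()
print(model.forecast(1))    # projected payment for the following year
```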
The Performance Metrics Database and Analytics (PMDA) project leverages artificial intelligence for various functions, including anomaly detection and correction, language interpretation and translation, and knowledge management. By utilizing AI technologies, this initiative aims to enhance the accuracy and efficiency of performance metrics analysis, improve communication across language barriers, and streamline the management of knowledge resources within the organization.
The Relationships, Events, Contacts, and Outreach Network (RECON) project employs artificial intelligence to develop a recommender system and conduct sentiment analysis. This initiative aims to enhance the understanding of stakeholder relationships and interactions by providing personalized recommendations and insights based on sentiment analysis, ultimately improving outreach efforts and engagement strategies.
The Risk Adjustment Payment Integrity Determination System (RAPIDS) utilizes artificial intelligence for classification purposes and to enhance process efficiency. By applying AI techniques, this system aims to improve the accuracy of risk adjustment payments and streamline the overall determination process, ensuring that payments are aligned with the appropriate risk levels and enhancing the integrity of the payment system.
This project focuses on analyzing historical drug cost increases to predict future trends in drug pricing. By leveraging past data, the initiative aims to provide insights into potential future cost escalations, enabling better financial planning and decision-making for healthcare providers and policymakers.
This initiative involves analyzing the market share of generic drugs in comparison to brand-name drugs over time, utilizing data from Part D claims volume. By forecasting future market shares, the project aims to provide valuable insights into trends in drug utilization, helping stakeholders make informed decisions regarding drug pricing and availability in the marketplace.
This project focuses on detecting anomalies in drug costs associated with Part D claims. By identifying unusual pricing patterns or discrepancies, the initiative aims to enhance oversight and ensure that drug pricing remains fair and consistent, ultimately supporting the integrity of the healthcare system.
The Artificial Intelligence (AI) Explorers Program Pilot for Automated Technical Profile is a 90-day initiative aimed at researching and developing a machine-readable profile for CMS systems. This project seeks to create a “technology fingerprint” for CMS projects by analyzing various data sources throughout different stages of their development lifecycle, ultimately enhancing the understanding and management of technology applications within CMS.
The AI Explorers Program Pilot for Section 508 Accessibility Testing is a 90-day project designed to assist CMS technical leads and Application Development Organizations (ADOs) in conducting thorough analyses of test result data. This initiative supports the CMS Section 508 Program, which ensures that electronic and information technology is accessible to people with disabilities, thereby promoting inclusivity and compliance with accessibility standards.
This project aims to automate the processing of large volumes of submitted docket comments by utilizing artificial intelligence and machine learning techniques. The system will facilitate the transfer, deduplication, summarization, and clustering of comments, thereby streamlining the review process and enhancing the efficiency of stakeholder engagement and feedback analysis.
This initiative focuses on enhancing the detection of adverse events of special interest (AESI) related to vaccines by developing an appropriate machine learning model. By utilizing clinically oriented language models pre-trained on clinical documents from UCSF, the project aims to refine the identification of AESI phenotypes, ultimately improving the monitoring and safety assessment of vaccines.
The BEST Platform is designed to enhance post-market surveillance of biologics by employing a range of applications and techniques for the semi-automated detection, validation, and reporting of adverse events. By utilizing machine learning (ML) and natural language processing (NLP) technologies, the platform effectively identifies potential adverse events from electronic health records (EHRs) and extracts critical features for clinician validation, thereby improving patient safety and regulatory oversight.
This project focuses on developing advanced machine learning approaches for selecting population pharmacokinetic models, which are essential for understanding drug behavior in different populations. The initiative includes the creation of a deep learning and reinforcement learning framework for model selection, as well as the implementation of a genetic algorithm approach in Python. These methodologies aim to enhance the accuracy and efficiency of model-based bioequivalence analysis, ultimately supporting better drug development and regulatory decisions.
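A minimal sketch of the genetic-algorithm idea applied to model selection: candidate models are encoded as binary feature masks and evolved against a penalized fit criterion. The fitness function here is a stand-in (penalized least squares on synthetic data), not the project’s actual pharmacokinetic selection criterion.

```python
# Sketch of a genetic algorithm searching over candidate model features.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_candidates = 200, 8
X = rng.normal(size=(n_obs, n_candidates))
true_mask = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y = X @ (true_mask * 2.0) + rng.normal(0, 0.5, n_obs)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = ((y - Xs @ beta) ** 2).sum()
    # Penalize model size, in the spirit of AIC/BIC-style selection
    return -(n_obs * np.log(rss / n_obs) + 2 * mask.sum())

pop = rng.integers(0, 2, size=(30, n_candidates))
for _ in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # keep the fittest
    cut = rng.integers(1, n_candidates, size=30)
    mates = parents[rng.integers(0, 10, size=(30, 2))]
    pop = np.where(np.arange(n_candidates) < cut[:, None],  # one-point crossover
                   mates[:, 0], mates[:, 1])
    flip = rng.random(pop.shape) < 0.05                     # mutation
    pop = np.where(flip, 1 - pop, pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", best)
```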
This project aims to develop and implement a novel machine learning algorithm designed to estimate heterogeneous treatment effects, which will help prioritize the development of product-specific guidance (PSG). The initiative involves three key tasks: first, addressing the challenge of confounding variables in observational data by utilizing a variational autoencoder to simultaneously estimate hidden confounders and treatment effects; second, evaluating the model on synthetic datasets and established benchmarks to assess its interpretability; and third, validating the model with real-world PSG data in collaboration with the FDA team. The project will utilize publicly available datasets, such as the Orange Book and FDA PSGs, as well as internal data, to ensure comprehensive validation and applicability of the model.
This initiative focuses on enhancing the efficiency of product-specific guidance (PSG) reviews through the development of advanced tools based on text analysis and machine learning. The project includes creating a novel neural summarization model that integrates an information retrieval system, utilizing dual attention mechanisms for both sentence-level and word-level outputs. The new model will be evaluated using PSG data and the large CNN/Daily Mail dataset to ensure its effectiveness. Additionally, an open-source software package will be developed to facilitate the implementation of the text summarization model and the information retrieval system, promoting accessibility and collaboration in PSG review processes.
The BEAM (Bioequivalence Assessment Mate) project aims to create a data and text analytics tool designed to enhance the quality and efficiency of bioequivalence assessments. By utilizing verified data analytics packages, text mining techniques, and artificial intelligence (AI) toolsets, including machine learning (ML), the initiative seeks to streamline the labor-intensive processes involved in bioequivalence evaluations, ultimately facilitating more efficient and high-quality regulatory assessments.
This project focuses on developing new tools and methods for monitoring drug-induced adverse events (AEs) to enhance early signal detection and safety assessment of marketed drugs. By employing natural language processing (NLP) and data mining (DM) techniques, the initiative aims to extract relevant information from approved drug labeling for statistical modeling. This analysis will help determine when specific AEs are typically labeled (either pre- or post-market) and identify detection patterns, including predictive factors, within the first three years of a drug’s marketing. The project seeks to improve understanding of the timing and early detection of AEs, facilitating targeted monitoring of novel drugs. Funding will also support an ORISE fellow to contribute to this research.
The Centers of Excellence in Regulatory Science and Innovation (CERSI) project focuses on leveraging artificial intelligence to enhance remote interactions in four key areas identified by the FDA: transcription, translation, document and evidence management, and collaborative workspaces. The project utilizes advanced automatic speech recognition technology, specifically a transformer-based sequence-to-sequence (seq2seq) model, which is trained to generate accurate transcripts. Given the challenges of using pre-trained models that may not accommodate various accents or specialized terminology, researchers will manually transcribe a selection of video/audio materials to fine-tune the model for better performance in the regulatory context. Additionally, the project aims to develop a comprehensive system for managing documents and evidence, incorporating a document classifier, a video/audio classifier, and an interactive middleware to facilitate seamless access and sharing of documents among participants.
This project focuses on identifying novel synthetic opioids (NSOs) by analyzing publicly available social media and forensic chemistry data. By utilizing the FastText library, the initiative creates vector models for known NSO-related terms within a large corpus of social media text. The system provides users with similarity scores and expected prevalence estimates for various terms, thereby enhancing future data collection efforts and improving the understanding of emerging drug products in social media discourse.
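A minimal sketch of the FastText approach: train subword embeddings on a social-media corpus and query terms similar to a known NSO name. The corpus, parameters, and query term are illustrative only.

```python
# Sketch of FastText similarity scoring for emerging drug terms.
from gensim.models import FastText

corpus = [
    ["anyone", "tried", "iso", "lately", "stronger", "than", "fent"],
    ["warning", "pressed", "pills", "testing", "positive", "for", "fluorofentanyl"],
    ["nitazenes", "showing", "up", "in", "local", "supply"],
]

model = FastText(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=20)

# Subword embeddings let FastText score even rare or misspelled drug terms
for term, score in model.wv.most_similar("fluorofentanyl", topn=3):
    print(term, round(score, 3))
```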
This initiative involves the development of an artificial intelligence-based deduplication algorithm designed to identify duplicate individual case safety reports (ICSRs) within the FDA Adverse Event Reporting System (FAERS). By processing unstructured data from free-text narratives using natural language processing (NLP), the algorithm extracts relevant clinical features. It employs a probabilistic record linkage approach that combines both structured and unstructured data to effectively identify duplicates. This optimization allows for comprehensive processing of the entire FAERS database, facilitating enhanced data mining and analysis of adverse event reports.
The Information Visualization Platform (InfoViP) is designed to enhance post-market safety surveillance by improving the review and evaluation process of Individual Case Safety Reports (ICSRs). By incorporating artificial intelligence and advanced visualization techniques, InfoViP facilitates the detection of duplicate ICSRs, generates temporal data visualizations, and classifies ICSRs for better usability. This platform aims to increase the efficiency and scientific rigor of safety assessments, ultimately supporting more effective monitoring of drug safety.
This project aims to explore the use of unsupervised learning techniques to develop code mapping algorithms that can harmonize data across different healthcare systems within the Sentinel framework. By employing data-driven statistical methods, the initiative seeks to identify and reduce coding discrepancies, facilitating the transfer of knowledge and best practices between sites. The ultimate goal is to create scalable and automated solutions for harmonizing electronic health records (EHR) data, improving interoperability and data consistency across systems.
This project focuses on enhancing the ascertainment of date and cause of death through the development of algorithms that probabilistically link alternative data sources with electronic health records (EHRs). By creating generalizable approaches to improve mortality assessment, the initiative aims to enhance the validity of Sentinel investigations that utilize mortality as an endpoint. The project outlines two specific aims: first, to leverage publicly available online data to determine the date of death for patients from two healthcare systems; and second, to augment cause of death data by analyzing healthcare system narrative text and administrative codes to generate probabilistic estimates for common causes of death.
This study aims to develop a scalable, automated tool that utilizes natural language processing (NLP) to assist in chart abstraction and feature extraction from electronic medical records (EMRs). By leveraging claims and EHR data—encompassing structured, semi-structured, and unstructured formats—the project seeks to demonstrate the usability and value of these data sources in a pharmacoepidemiology context. The study will utilize real-world longitudinal data from Cerner Enviza EHRs linked to claims, applying NLP techniques to identify and contextualize pre-exposure confounding variables, integrate unstructured EHR data for confounding adjustment, and ascertain outcomes. A specific use case will investigate the relationship between montelukast use in asthma patients and neuropsychiatric events.
The MASTER PLAN Y4 outlines the mission of the Innovation Center to integrate longitudinal patient-level electronic health record (EHR) data into the Sentinel System. This integration aims to facilitate in-depth investigations of medication outcomes using more comprehensive clinical data than what is typically available through insurance claims. The Master Plan presents a five-year roadmap for achieving this vision, focusing on four strategic areas: (1) enhancing data infrastructure; (2) advancing feature engineering; (3) improving causal inference methodologies; and (4) developing detection analytics. The initiative emphasizes the use of emerging technologies, including natural language processing, advanced analytics, and data interoperability, to enhance the capabilities of the Sentinel System.
The Creating a Development Network project aims to establish a framework for converting structured data from electronic health records (EHRs) and linked claims into the Sentinel Common Data Model (SCDM) at participating sites. The project has two specific aims: first, to ensure that structured data is consistently transformed into the SCDM format; and second, to develop a standardized process for storing free text notes at each site. This includes creating procedures for routine metadata extraction from these notes, enabling direct access for investigators and facilitating timely execution of future tasks within the Sentinel system.
The project focuses on empirically evaluating signal detection approaches based on electronic health record (EHR) data. It aims to develop methodologies for abstracting and integrating both structured and unstructured EHR data, enhancing the ability to identify signals related to health outcomes that can only be detected through EHR data. This includes leveraging natural language processing (NLP) and laboratory values to improve the accuracy and comprehensiveness of signal detection in healthcare settings.
This project involves the development of an AI-powered label comparison tool designed to assist reviewers in identifying safety-related changes in drug labeling over time. By analyzing drug labels in PDF format, the tool utilizes BERT-based natural language processing to detect and highlight newly added safety issues. This capability supports the FDA’s efforts to update drug labeling based on postmarket data, ensuring that safety information is accurately reflected and communicated.
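One way to picture the comparison step is shown below: embed sentences from the old and new label text and flag new sentences with no close semantic counterpart in the prior revision. The embedding model, threshold, and example label text are assumptions; the tool’s actual BERT pipeline and PDF parsing are not shown.

```python
# Sketch of flagging newly added safety text between two label revisions.
from sentence_transformers import SentenceTransformer, util

old_label = [
    "Hypersensitivity reactions have been reported.",
    "Monitor hepatic function during treatment.",
]
new_label = [
    "Hypersensitivity reactions, including anaphylaxis, have been reported.",
    "Monitor hepatic function during treatment.",
    "Cases of severe neutropenia have been observed in postmarketing reports.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
old_emb = model.encode(old_label, convert_to_tensor=True)
new_emb = model.encode(new_label, convert_to_tensor=True)

similarity = util.cos_sim(new_emb, old_emb)          # new sentences x old sentences
for sentence, scores in zip(new_label, similarity):
    if scores.max() < 0.8:                           # no close counterpart in the old label
        print("Possible new safety text:", sentence)
```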
This initiative aims to create a prototype software application that enhances the review process of the FDA Adverse Event Reporting System (FAERS) data. By developing computational algorithms, the application will semi-automatically categorize FAERS reports into meaningful medication error categories based on free text narratives. The project leverages existing annotated reports and collaborates with subject matter experts to refine initial natural language processing (NLP) algorithms. An active learning approach will be employed to continuously improve the accuracy of report categorization, ultimately supporting better medication safety monitoring.
This project aims to enhance the detection of data anomalies in pharmacokinetic (PK) profiles related to Abbreviated New Drug Applications (ANDA). The Office of Biostatistics has developed an R Shiny application called DABERS (Data Anomalies in BioEquivalence R Shiny) to support the Office of Scientific Investigations (OSI) and the Office of Generic Drugs (OGD). The project addresses the complexity of PK and pharmacodynamic data, which cannot be adequately described by a single statistic. By employing advanced statistical methods, including machine learning and data augmentation, the initiative seeks to identify potential data manipulations and anomalies. The project has two main objectives: to provide a data-driven method for modeling complex PK patterns from a regulatory perspective, and to enhance understanding of drug response variability for public health research and drug development, ultimately guiding patient subgroup targeting and optimal dosing strategies.
The CluePoints CRADA project employs unsupervised machine learning techniques to detect and identify data anomalies within clinical trial data across various levels, including site, country, and subject. By considering multiple use cases, the project aims to enhance data quality and integrity, facilitate site selection for inspections, and assist reviewers in identifying potentially problematic sites for further sensitivity analyses. This initiative is crucial for ensuring the reliability and validity of clinical trial data.
The Clinical Study Data Auto-transcribing Platform, known as AI Analyst, is designed to autonomously generate clinical study reports from source data, thereby assessing the strength and robustness of analytical evidence for drug labeling. The platform transcribes Study Data Tabulation Model (SDTM) datasets from phase I/II studies into comprehensive clinical study reports with minimal human intervention. The underlying AI algorithm emulates the thought processes of subject matter experts, such as clinicians and statisticians, to accurately interpret study designs and results. The platform incorporates multiple layers of data pattern recognition to address the complexities of clinical study assessments, including diverse study designs and reporting formats. It has been trained on hundreds of New Drug Application (NDA) and Biologics License Application (BLA) submissions, as well as over 1500 clinical trials. The AI Analyst is compatible with various study types, including those related to drug interactions, renal/hepatic impairment, and bioequivalence. In 2022, the Office of Clinical Pharmacology initiated the RealTime Analysis Depot (RAD) project to routinely utilize this AI platform for reviewing New Molecular Entity (NME), 505(b)(2), and 351K submissions.
The Data Infrastructure Backbone for AI Applications project involves the creation of a data lake, referred to as the WILEE knowledgebase, which will integrate and ingest data from various sources to enhance advanced analytics and support risk-based decision-making. The data sources include internal stakeholder submissions, scientific literature from PubMed and NIH, CFSAN-generated data, news articles, and food sales data, among others. The design of the data lake allows for automated data ingestion while also permitting manual curation when necessary. It is structured to facilitate the identification and integration of new data sources as they become available. This centralized data repository will enhance insights into CFSAN-regulated products, food additives, and other relevant substances, ultimately improving knowledge discovery during the review of premarket submissions and post-market monitoring of the U.S. food supply.
The AI Engine for Knowledge Discovery, Post-Market Surveillance, and Signal Detection project aims to enhance the CFSAN’s capabilities in identifying potential issues related to commodities under its jurisdiction. By leveraging artificial intelligence, the project focuses on investigating chronic exposure risks associated with food additives, color additives, food contact substances, and contaminants, as well as the long-term use of cosmetics. The OFAS Warp Intelligent Learning Engine (WILEE) serves as an intelligent knowledge discovery and analytic agent, providing a horizon-scanning solution that analyzes data from the WILEE knowledgebase. This enables the Office to adopt a proactive approach, forecast industry trends, and prepare for potential operational risks, such as changes in USDA regulations. WILEE will facilitate risk-based decision-making by integrating diverse data sources and generating timely reports with actionable insights, significantly improving response times and overall effectiveness.
The Emerging Chemical Hazard Intelligence Platform (ECHIP) is an AI-driven solution developed to identify potential chemical hazards and emerging concerns related to substances of interest for CFSAN. By utilizing data from news sources, social media, and scientific literature, ECHIP enables CFSAN to proactively address stakeholder concerns and potential hazards. Prior to ECHIP, the signal identification and verification process could take 2-4 weeks, depending on the number of scientists involved in reviewing relevant literature. Pilot studies have shown that ECHIP can reduce this process to approximately 2 hours by automatically ingesting, analyzing, and presenting data from multiple sources, thereby streamlining the signal detection and verification workflow.
OSCAR, the Office of Science Customer Assistance Response chatbot, is designed to provide 24/7 support to users seeking assistance from the Customer Service Center. It features a user-friendly interface that allows users to input questions and access previous responses. Additionally, OSCAR includes a dashboard for administrative users, providing key metrics to monitor usage and performance, thereby enhancing customer service efficiency.
The Self-Service Text Analytics Tool (SSTAT) enables users to analyze and explore topics within a collection of documents. Users can submit documents to the tool, which then generates a list of topics and associated keywords. SSTAT automatically produces a visual representation of the documents and their related topics, providing users with a quick overview and facilitating efficient document analysis.
ASSIST4Tobacco is a semantic search system designed to assist stakeholders in the Center for Tobacco Products (CTP) in locating tobacco authorization applications with greater accuracy and efficiency. By leveraging advanced search capabilities, the system enhances the ability of users to find relevant applications, thereby streamlining the review process and improving regulatory oversight.
This project utilizes genomic data and artificial intelligence/machine learning (AI/ML) techniques to investigate antimicrobial resistance (AMR) in pathogens such as Salmonella, E. coli, Campylobacter, and Enterococcus, sourced from retail meats, humans, and food-producing animals. The XGBoost machine learning model is employed to enhance predictions of antimicrobial susceptibility by estimating Minimum Inhibitory Concentrations (MICs) based on whole genome sequencing (WGS) data, thereby improving the understanding and management of AMR.
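A minimal sketch of the XGBoost step, using a random placeholder matrix in place of genome-derived features (e.g., k-mer presence/absence) and a synthetic log2 MIC target; the actual NARMS feature engineering is not reproduced here.

```python
# Sketch of predicting log2 MIC values from genomic features with XGBoost.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(300, 1000)).astype(np.float32)   # k-mer presence/absence
log2_mic = X[:, :5].sum(axis=1) + rng.normal(0, 0.3, 300)     # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, log2_mic, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train)

# Predictions within one two-fold dilution are typically counted as correct
preds = model.predict(X_test)
within_one_dilution = (np.abs(preds - y_test) <= 1).mean()
print(f"within one dilution: {within_one_dilution:.2%}")
```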
This project focuses on developing virtual animal models using artificial intelligence (AI) to simulate results from animal studies, which are critical for evaluating the safety of chemicals. As regulatory agencies, including the FDA, move towards the 3Rs principle (reduction, refinement, and replacement) of animal testing, the project proposes an AI-based generative adversarial network (GAN) architecture to learn from existing animal study data. This approach aims to generate relevant data for new and untested chemicals without the need for additional animal experiments. The FDA’s guidelines and frameworks, such as the Predictive Toxicology Roadmap, support the modernization of toxicity assessments through alternative methods, ultimately enhancing the FDA’s predictive capabilities and facilitating drug development while minimizing animal testing.
This proposal addresses the growing concerns about bias in artificial intelligence (AI) systems used in biomedical sciences, particularly in the context of natural language processing (NLP) applied to drug labeling documents. The project aims to conduct a comprehensive study to assess potential biases that may arise when AI models trained on diverse datasets are applied to new domains. By understanding these biases, the initiative seeks to develop strategies to mitigate them, ensuring that AI applications in document analysis for FDA reviews are fair and accurate, ultimately enhancing the integrity of regulatory processes.
This project focuses on identifying sex disparities in opioid drug safety signals by analyzing data from the FDA Adverse Events Report Systems (FAERS) and social media platforms like Twitter. The initiative aims to address the Office of Women’s Health (OWH) 2023 priority area by examining sex differences in adverse events related to opioid drugs. By comparing findings from FAERS and Twitter, the project seeks to determine whether social media can serve as an early warning system for opioid-related issues affecting women. The insights gained from this analysis could contribute to improving women’s health outcomes in the context of opioid use.
This project aims to predict adverse events associated with drug interactions by utilizing drug-endogenous ligand-target networks and advanced machine learning methods. Molecular similarity has been a valuable tool in various fields, including virtual screening and toxicology, but predicting toxicological responses remains complex due to the involvement of multiple pathways and protein targets. The project focuses on developing a universal molecular modeling approach that employs unique three-dimensional fingerprints to capture the steric and electrostatic interactions between ligands and receptors. By quantifying both structural and functional similarities, this approach aims to enhance the prediction of adverse events from AI-generated networks, potentially revealing new insights into mechanisms of toxicity.
This project focuses on developing predictive toxicology models to assess drug placental permeability, which is crucial for ensuring fetal safety during pregnancy. The human placenta facilitates the transfer of various substances through mechanisms such as passive diffusion and active transport. The project aims to utilize three-dimensional molecular similarities of endogenous placental transporter ligands to known drug substrates to identify the most likely mode of drug transportation. By building predictive models that link molecular characteristics to placental permeability, the initiative seeks to enhance the understanding of how drugs interact with placental transporters. Data will be gathered from literature mining, CDER databases, and empirical assessments using in vitro models, with validation conducted through blind test sets and small-scale studies of drugs with unknown permeabilities.
The Opioid Agonists/Antagonists Knowledgebase (OAK) project aims to address the rising opioid overdose deaths in the United States by supporting the development of abuse-deterrent analgesic products and innovative treatments for opioid use disorder (OUD). The project will curate experimental data on opioid agonist and antagonist activities from public sources and conduct functional opioid receptor assays on approximately 2800 drugs using a quantitative high-throughput screening (qHTS) platform. Additionally, the initiative will develop and validate in silico models to predict opioid activity. The OAK knowledgebase will serve as a valuable resource for FDA reviewers, providing access to experimental data and protocols, and enabling read-across methods for estimating activity in chemicals lacking experimental data. This comprehensive approach aims to inform regulatory reviews and facilitate the development of safer analgesics and treatments for OUD.
The project aims to create a comprehensive open-access resource called the Molecules with Androgenic Activity Resource (MAAR) to facilitate the assessment of chemicals for androgenic activity. The androgen receptor (AR) is crucial for evaluating drug safety and chemical risk, as it can be both a target and an off-target for various substances. Currently, existing data on androgenic activity is scattered across multiple sources and formats, hindering its usability. MAAR will consolidate this data and provide predictive models that adhere to the FAIR principles (Findable, Accessible, Interoperable, and Reusable). This resource will enhance research capabilities and support regulatory decision-making regarding the efficacy and safety of FDA-regulated products.
This project focuses on leveraging artificial intelligence (AI) and natural language processing (NLP) to analyze FDA labeling documents, which are often unstructured and lack standardization. The study aims to utilize advanced language models, such as BERT and BioBERT, to extract meaningful information from over 120,000 FDA drug labeling documents. Key areas of investigation include interpreting and classifying drug properties (safety and efficacy), summarizing text to highlight important sections, conducting automatic anomaly analysis for signal identification, and enhancing information retrieval through a question-and-answer format. The project will compare AI-based NLP approaches with traditional MedDRA methods to improve drug safety and efficacy assessments. Ultimately, the findings will establish benchmarks for applying public language models to FDA documents and support the future development of the FDA Label tool used in the Center for Drug Evaluation and Research (CDER) review process.
This project aims to utilize big data analytics and artificial intelligence to inform the selection of drugs for treating COVID-19. Given the global health crisis, with millions infected and significant mortality rates, there is an urgent need to repurpose existing drugs for effective treatment. The project will mine adverse drug event data from various sources, including public databases and social media, to gather safety information on potential repurposed drugs. The ultimate goal is to provide comprehensive adverse event data that will facilitate the safety evaluation of these drugs, helping to identify the most suitable candidates for repurposing and ensuring that the right patients are selected for treatment, thereby enhancing efforts to combat the pandemic.
This project focuses on advancing the understanding and application of explainable artificial intelligence (AI) in regulatory contexts. As AI technologies become more prevalent, the FDA faces challenges in assessing AI-centric products and implementing AI methods to enhance its operations. A key aspect of this initiative is to explore the interpretability of AI models, which often lacks quantitative metrics and can be subjective. The project will investigate various AI methods, evaluating their performance and interpretability using established benchmark datasets and extending the analysis to clinical and pre-clinical datasets. The findings will provide essential parameters and guidance for developing explainable AI models, ultimately facilitating informed decision-making in regulatory settings.
This project aims to investigate sex differences in cardiovascular risks associated with prescription opioid use (POU) through big data analysis. POU can lead to various adverse effects across different body systems, and significant sex differences have been noted in cardiac outcomes. The study will develop a novel statistical model to identify safety signals while considering gender as a variable, addressing limitations of existing FDA data mining methods. By analyzing real-world evidence from electronic health records (EHRs) and employing AI tools, the project seeks to uncover sex-dependent risk factors for cardiotoxicity related to POU. This initiative aligns with the FDA’s strategic priorities to reduce addiction crises and enhance women’s health research, ultimately providing valuable insights for drug reviewers and healthcare providers to mitigate cardiovascular risks in women using POU.
This collaborative project aims to enhance the Investigational New Drug (IND) review process by utilizing artificial intelligence (AI) and machine learning (ML) to develop animal-free models for toxicity assessments. The initiative focuses on identifying safety biomarkers from non-animal assays and predicting safety outcomes based on chemical structure data. Deep learning (DL), a sophisticated subset of ML, will be employed to improve the identification of safety concerns related to drug-induced liver injury (DILI) and carcinogenicity. By leveraging DL’s advanced capabilities, the project seeks to streamline the IND review process and reduce reliance on animal testing, ultimately improving drug safety evaluations.
The individual Functional Activity Composite Tool (inFACT) is designed to support the Social Security Administration (SSA) in the disability determination process. It assists adjudicators by extracting and presenting relevant functional evidence from extensive case records, which can span hundreds or thousands of pages. inFACT organizes and displays information regarding an individual’s overall functional capabilities, derived from free-text medical records, and aligns this data with key business elements, thereby streamlining the review process and enhancing decision-making efficiency.
The Assisted Referral Tool is designed to aid in the assignment of relevant scientific areas for grant applications. By streamlining the referral process, the tool ensures that applications are directed to the appropriate scientific domains, enhancing the efficiency and accuracy of grant management and review.
NanCI (Connecting Scientists) is an AI-driven platform that helps users discover scientific content aligned with their interests. Users can collect research papers into a folder and utilize the tool to find similar articles in the literature. The platform allows users to refine recommendations through up or down voting, enhancing the personalization of content. Additionally, NanCI facilitates networking among users with shared interests, enabling them to connect and exchange recommendations within a scientific social network.
This project involves the development of a tool that employs natural language processing (NLP) and machine learning to assess incoming grant applications for their focus on Implementation Science (IS). The tool calculates an IS score, which predicts whether a grant proposal aligns with the principles of Implementation Science, a relatively new field. The National Heart, Lung, and Blood Institute (NHLBI) utilizes this IS score to inform decisions regarding the assignment of applications to specific divisions for effective grants management and oversight.
The Federal IT Acquisition Reform Act (FITARA) Tool is designed to streamline the identification of IT-related contracts within the National Institute of Allergy and Infectious Diseases (NIAID). By automating this process, the tool enhances efficiency and accuracy in tracking and managing IT acquisitions, ensuring compliance with federal regulations.
The DAIT AIDS-Related Research Solution employs natural language processing (NLP) and classification algorithms to analyze incoming grant applications. It predicts the priority level (high, medium, low) and identifies the relevant research area for each application. By ranking applications based on these predictions, the tool helps prioritize higher-ranked applications for review, thereby optimizing the grant evaluation process.
The Scientific Research Data Management System’s Conflict of Interest Tool utilizes natural language processing (NLP) techniques, such as optical character recognition (OCR) and text extraction, to identify entities within grant applications. This functionality assists the NIAID’s Scientific Review Program team in efficiently detecting potential conflicts of interest (COI) between grant reviewers and applicants, thereby enhancing the integrity of the review process.
The Tuberculosis (TB) Case Browser Image Text Detection tool is designed to identify text within images that may contain Personally Identifiable Information (PII) or Protected Health Information (PHI) in TB-related portals. By detecting such sensitive information, the tool helps ensure compliance with privacy regulations and protects patient confidentiality.
The Research Area Tracking Tool is a dashboard that leverages machine learning algorithms to identify and track projects within designated high-priority research areas. This tool enhances visibility into ongoing research efforts, facilitating better resource allocation and strategic planning within the organization.
The NIDCR Digital Transformation Initiative (DTI) aims to develop a natural language processing (NLP) chatbot that enhances operational efficiency, transparency, and consistency for employees at the National Institute of Dental and Craniofacial Research (NIDCR). This chatbot will serve as a resource for employees, streamlining communication and information retrieval within the organization.
The NIDCR Data Bank project enables intramural research program investigators to transfer large volumes of unstructured data into a scalable cloud archival storage solution. This system is designed to be cost-effective and includes robust metadata management for governance purposes. Additionally, it facilitates secondary and tertiary data analysis opportunities by leveraging advanced cognitive services, including artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) toolsets.
The Automated Approaches for Table Extraction project focuses on creating a model-based automation process to streamline the extraction of data from published tables. Given that data tables often contain rich and critical information, this tool significantly reduces the time and effort required for manual data extraction, enhancing efficiency in data analysis and research.
The SWIFT Active Screener employs statistical models to optimize the literature screening process for the Division of Translational Toxicology. By utilizing active learning techniques and incorporating user feedback, the tool automatically prioritizes studies, thereby saving screeners time and effort while enhancing the efficiency of evidence evaluations.
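SWIFT's statistical models are not detailed in this summary, so the following is a generic active-learning loop under stated assumptions: a classifier is retrained as screeners supply labels, and the unscreened study with the highest predicted relevance is surfaced next. The toy titles and the simulated screener are placeholders.

```python
# Hypothetical sketch of an active-learning screening loop; not the SWIFT implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = {  # toy seed labels from screeners: 1 = include, 0 = exclude
    "Chronic toxicity of compound X in rats": 1,
    "Stock market effects of commodity prices": 0,
}
unlabeled = [
    "Hepatotoxicity of compound X in mice",
    "Quarterly earnings of chemical manufacturers",
    "Developmental effects of compound X exposure",
]

vectorizer = TfidfVectorizer()
for _ in range(2):  # each round: retrain, then surface the most likely relevant study
    X = vectorizer.fit_transform(list(labeled) + unlabeled)
    clf = LogisticRegression().fit(X[: len(labeled)], list(labeled.values()))
    scores = clf.predict_proba(X[len(labeled):])[:, 1]
    top = max(range(len(unlabeled)), key=lambda i: scores[i])
    title = unlabeled.pop(top)
    # In practice a human screener supplies this label; here it is faked for the demo.
    labeled[title] = 1 if "compound X" in title else 0
    print(f"Screener reviews next: {title}")
```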
The Clinical Trial Predictor is a sophisticated tool that employs a combination of natural language processing (NLP) and machine learning algorithms to analyze the text of research applications. By examining titles, abstracts, narratives, specific aims, and research strategies, the tool predicts whether the applications are likely to involve clinical trials. This predictive capability aids in the efficient review and categorization of research proposals.
The JIT Automated Calculator (JAC) is a tool that utilizes natural language processing (NLP) to analyze Just-In-Time (JIT) Other Support forms submitted by principal investigators (PIs). By parsing these forms, the JAC determines the amount of external support that PIs are receiving from sources other than the pending application. This functionality enhances transparency and helps in evaluating the overall funding landscape for research projects.
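The JAC's parsing rules are not published in this summary; the sketch below only illustrates the general idea with an invented form excerpt and regular expression, extracting award amounts and totaling support from sources other than the pending application.

```python
# Hypothetical sketch of the parsing-and-totaling step for an Other Support excerpt.
# The form layout and regular expression are invented for illustration.
import re

other_support_text = """
Source: DOE  Title: Imaging Core  Award amount: $250,000
Source: NSF  Title: Quantitative Training  Award amount: $120,500
Source: NIH (pending application)  Title: This proposal  Award amount: $500,000
"""

pattern = re.compile(r"Source:\s*(?P<source>.+?)\s+Title:.*?Award amount:\s*\$(?P<amount>[\d,]+)")
external_total = 0
for match in pattern.finditer(other_support_text):
    source, amount = match["source"], int(match["amount"].replace(",", ""))
    if "pending" not in source.lower():
        external_total += amount

print(f"External support excluding the pending application: ${external_total:,}")
```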
The Similarity-based Application and Investigator Matching (SAIM) system employs natural language processing (NLP) to identify grants awarded to National Institute of General Medical Sciences (NIGMS) Principal Investigators that are funded by non-NIH sources. This tool helps assess whether a new grant application overlaps significantly with existing grants from other agencies, thereby promoting efficient resource allocation and reducing redundancy in funding.
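One simple way to approximate this kind of overlap check, assuming plain-text abstracts, is cosine similarity over TF-IDF vectors; the abstracts and the flagging threshold below are illustrative, not SAIM's actual method.

```python
# Hypothetical sketch of overlap scoring between a new application and existing awards.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

existing_grants = [  # toy abstracts of non-NIH-funded awards held by the same PI
    "Mechanisms of RNA polymerase pausing in bacterial transcription.",
    "Outreach program for undergraduate training in quantitative biology.",
]
new_application = "Single-molecule studies of transcriptional pausing by RNA polymerase."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(existing_grants + [new_application])
similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for abstract, sim in zip(existing_grants, similarities):
    flag = "POSSIBLE OVERLAP" if sim > 0.3 else "ok"  # illustrative threshold
    print(f"{sim:.2f}  {flag}  {abstract}")
```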
The project focuses on improving the accessibility of Adobe .pdf documents to meet Section 508 standards, which ensure that electronic and information technology is accessible to people with disabilities. The National Library of Medicine (NLM) is exploring the use of artificial intelligence (AI) to remediate existing .pdf files that do not comply with these standards. By enhancing the accessibility of these documents, the initiative aims to better serve individuals who rely on assistive technologies, such as those who are blind or visually impaired.
The MEDIQA project focuses on automating the process of question answering in the biomedical field using artificial intelligence (AI) techniques. By leveraging both traditional and neural machine learning approaches, the project aims to address a diverse array of biomedical information needs. The goal is to enhance user access to National Library of Medicine (NLM) resources through a single entry point, streamlining the retrieval of relevant information for various users.
The CLARIN project aims to analyze clinical notes to detect clinicians’ attitudes, emotions, and potential biases. By employing artificial intelligence (AI) techniques, the project seeks to enhance understanding of how clinician sentiments may impact patient care and decision-making. This initiative supports efforts to promote equity and diversity in healthcare while improving the overall quality of care provided to patients.
The Best Match project introduces a new relevance search algorithm for PubMed, designed to enhance the user experience in finding biomedical literature. As the volume of published research continues to grow, retrieving the most relevant papers for specific queries has become increasingly difficult. The Best Match algorithm utilizes user intelligence and advanced machine learning techniques to prioritize search results based on relevance rather than the traditional date sort order, improving the efficiency of literature searches for millions of users.
The SingleCite project enhances the single citation search functionality in PubMed, which is crucial for users seeking specific documents in scholarly databases. The automated algorithm developed for SingleCite establishes a mapping between queries and documents by employing a regression function that predicts the likelihood of a retrieved document being the target. This prediction is based on three key variables: the score of the highest-scoring document, the score difference between the top two documents, and the fraction of the query matched by the candidate citation. SingleCite has demonstrated superior performance in benchmarking tests and is particularly effective in rescuing queries that would otherwise fail to retrieve relevant results.
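The paragraph names the three predictors but not the fitted function, so the sketch below trains a logistic regression on those same three features using invented numbers; it mirrors the shape of the approach rather than NLM's actual model.

```python
# Hypothetical sketch of the three-feature mapping decision described above;
# feature values and training data are toy numbers, not NLM's fitted model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per query: [top document score, gap to the second-best score,
#                      fraction of the query matched by the candidate citation]
X_train = np.array([
    [12.0, 6.0, 0.95],   # confident single-citation matches ...
    [10.5, 4.0, 0.90],
    [3.0, 0.2, 0.40],    # ... versus ambiguous or failed retrievals
    [2.5, 0.1, 0.30],
])
y_train = np.array([1, 1, 0, 0])  # 1 = retrieved document is the intended citation

model = LogisticRegression().fit(X_train, y_train)

query_features = np.array([[9.0, 3.5, 0.85]])
p_target = model.predict_proba(query_features)[0, 1]
print(f"Probability the top hit is the intended citation: {p_target:.2f}")
```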
The Computed Author tool addresses the challenge of author name ambiguity in PubMed, where multiple authors may share the same name, leading to irrelevant search results. The National Library of Medicine (NLM) developed a machine learning method that scores features to disambiguate pairs of papers with ambiguous author names. By employing agglomerative clustering, the tool groups all papers belonging to the same authors based on these classifications. The disambiguation process has been validated through manual verification, demonstrating higher accuracy than existing methods. This tool has been integrated into PubMed to enhance the efficiency of author name searches.
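A minimal sketch of the clustering stage, assuming pairwise "same author" similarity scores have already been produced by the feature-scoring model: convert similarities to distances and apply agglomerative clustering with an illustrative threshold. The similarity matrix below is invented.

```python
# Hypothetical sketch: cluster papers by author identity from pairwise similarity scores.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

papers = ["paper A", "paper B", "paper C", "paper D"]
# Toy pairwise "same author" similarities in [0, 1].
similarity = np.array([
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.2, 0.1, 0.8, 1.0],
])

clustering = AgglomerativeClustering(
    n_clusters=None,
    metric="precomputed",      # interpret the input as a distance matrix
    linkage="average",
    distance_threshold=0.5,    # illustrative cutoff
).fit(1.0 - similarity)        # convert similarity to distance

for paper, label in zip(papers, clustering.labels_):
    print(f"{paper} -> author cluster {label}")
```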
NLM-Gene is an innovative tool developed to automate the gene indexing process within PubMed articles, which is currently done manually by expert indexers. This tool utilizes advanced natural language processing (NLP) and deep learning techniques to identify gene names in biomedical literature, significantly reducing the time and resources required for indexing. The performance of NLM-Gene has been evaluated using gold-standard datasets, and it is set to be integrated into the MEDLINE indexing pipeline, enhancing literature retrieval and information access.
NLM-Chem is a tool designed to automate the chemical indexing process for PubMed articles, which is currently a manual task performed by expert indexers. By employing advanced natural language processing (NLP) and deep learning methods, NLM-Chem efficiently identifies chemical names in biomedical literature. Its effectiveness has been validated against gold-standard evaluation datasets, and it is scheduled for integration into the MEDLINE indexing pipeline, thereby improving the efficiency of literature retrieval and access to chemical information.
The Biomedical Citation Selector (BmCS) automates the article selection process for the National Library of Medicine (NLM), enhancing the efficiency and effectiveness of indexing and hosting relevant information for public access. By standardizing the selection process through automation, BmCS significantly reduces the time required to process MEDLINE articles, thereby improving the overall workflow and accessibility of biomedical literature.
MTIX is a machine learning-based system designed to automate the indexing of MEDLINE articles with Medical Subject Headings (MeSH) terms. Utilizing a multi-stage neural text ranking approach, MTIX enhances the efficiency of the indexing process, allowing for cost-effective and timely categorization of articles. This automation not only streamlines the indexing workflow but also improves the accessibility of biomedical literature for researchers and the public.
The ClinicalTrials.gov Protocol Registration and Results System Review Assistant is a research initiative focused on evaluating the potential of artificial intelligence (AI) to enhance the efficiency and effectiveness of reviewing study records. By exploring AI integration, the project aims to streamline the review process for clinical trial protocols and results, ultimately improving the management and accessibility of clinical trial information.
MetaMap is a powerful tool that connects biomedical text to concepts within the Unified Medical Language System (UMLS) Metathesaurus. By utilizing natural language processing (NLP), MetaMap links the text found in biomedical literature to the underlying knowledge, including synonym relationships, contained in the Metathesaurus. The program offers a flexible architecture for exploring various mapping strategies and their applications, and it is used by the Medical Text Indexer (MTI) to generate potential indexing terms, enhancing the indexing process for biomedical literature.
The HIV-related grant classifier tool is a user-friendly application designed for scientific staff to input grant information and automatically classify grants related to HIV research. The tool employs an automated algorithm to categorize the grants, and it features interactive data visualizations, including heat maps created with the Plotly Python library. These visualizations display the confidence levels of the predicted classifications, enhancing the analysis and management of HIV-related funding opportunities.
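Since the tool's heat maps are built with the Plotly Python library, a minimal sketch of that visualization step might look like the following; the grant identifiers, categories, and confidence values are placeholders.

```python
# Minimal sketch of a classification-confidence heat map with Plotly; all values are invented.
import plotly.graph_objects as go

grants = ["Grant 001", "Grant 002", "Grant 003"]
categories = ["HIV treatment", "HIV prevention", "Not HIV-related"]
confidence = [  # rows = grants, columns = categories
    [0.92, 0.05, 0.03],
    [0.10, 0.85, 0.05],
    [0.08, 0.12, 0.80],
]

fig = go.Figure(
    go.Heatmap(z=confidence, x=categories, y=grants, colorscale="Blues", zmin=0, zmax=1)
)
fig.update_layout(title="Predicted classification confidence per grant")
fig.show()
```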
This project focuses on developing automated methods for analyzing scientific topics using natural language processing (NLP) and artificial intelligence/machine learning (AI/ML) techniques. The approach groups semantically similar documents, such as grants, publications, and patents, and extracts AI-generated labels that accurately represent the scientific focus of each topic. This automated analysis aids in the evaluation and management of the National Institutes of Health (NIH) research portfolio, facilitating better insights into research trends and funding allocations.
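The grouping-and-labeling approach could be sketched, under simplifying assumptions, as clustering TF-IDF document vectors and using each cluster's top-weighted terms as a crude label; the real pipeline's embeddings and AI-generated labels are more sophisticated than this toy version.

```python
# Hypothetical sketch: group similar documents and derive rough topic labels
# from their top-weighted terms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "CRISPR-based gene editing of hematopoietic stem cells.",
    "Base editing strategies for sickle cell disease therapy.",
    "Deep learning segmentation of cardiac MRI images.",
    "Convolutional networks for automated radiology reporting.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = np.array(vectorizer.get_feature_names_out())
for cluster in range(2):
    center = km.cluster_centers_[cluster]
    label = ", ".join(terms[center.argsort()[::-1][:3]])  # top 3 terms as a crude label
    members = [d for d, c in zip(docs, km.labels_) if c == cluster]
    print(f"Topic {cluster} [{label}]: {len(members)} documents")
```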
The Identification of Emerging Areas project utilizes artificial intelligence (AI) and machine learning (ML) to analyze the age and rate of progress of various research topics within NIH portfolios. By assessing these metrics, the project can identify emerging areas of research on a large scale, thereby facilitating the acceleration of scientific progress and enabling more strategic research investments.
The Person-Level Disambiguation project focuses on accurately attributing grants, articles, and other research outputs to individual researchers, which is essential for conducting high-quality analyses. This enhanced disambiguation method improves the identification of authors in PubMed articles and NIH grant applications, thereby supporting data-driven decision-making processes and ensuring that researchers receive appropriate credit for their work.
The Prediction of Transformative Breakthroughs initiative aims to enhance the pace of scientific discovery by predicting significant breakthroughs in biomedicine. By analyzing co-citation networks, the project has identified a common signature that can forecast breakthroughs more than five years before they are officially published. This predictive capability not only improves the efficiency of research investments but also has led to a patent application (U.S. Patent Application No. 63/257,818) for the methodology used in this approach.
The Machine Learning Pipeline for Mining Citations project, developed by the NIH Office of Portfolio Analysis, automates the identification of freely available scientific articles online that do not require a library subscription. The pipeline processes full-text PDFs, converting them to XML format, and employs a Long Short-Term Memory (LSTM) recurrent neural network to differentiate between reference text and other content within the articles. The references identified by the LSTM model are then processed through the Citation Resolution Service, enhancing the accessibility and usability of scientific literature. For further details, refer to the publication by Hutchins et al. (2019).
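The exact network configuration is described in Hutchins et al. (2019); the sketch below is only a schematic line-level LSTM classifier in Keras, with an invented toy dataset, to show the general shape of distinguishing reference text from other content.

```python
# Schematic line-level LSTM classifier (reference text vs. other content); the toy lines,
# vocabulary size, and layer sizes are illustrative, not the OPA pipeline's configuration.
import tensorflow as tf

lines = [
    "Smith J, Lee K. Nature. 2018;555:123-127.",       # reference-style line
    "Doe A, et al. J Clin Invest. 2017;127(4):1-9.",    # reference-style line
    "We measured cytokine levels in serum samples.",     # body text
    "Figure 2 shows the dose-response relationship.",    # body text
]
labels = tf.constant([1.0, 1.0, 0.0, 0.0])  # 1 = reference text, 0 = other content

vectorize = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=20)
vectorize.adapt(lines)
X = vectorize(tf.constant(lines))  # integer token ids, shape (4, 20)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=32),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)

new_line = vectorize(tf.constant(["Jones R. Science. 2019;363:44-48."]))
print("P(reference text):", float(model.predict(new_line, verbose=0)[0, 0]))
```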
The Machine Learning System for Predicting Translational Progress in Biomedical Research is designed to assess whether a research paper is likely to be cited in future clinical trials or guidelines. By analyzing early reactions from the scientific community, this system can provide real-time predictions of translational progress in biomedicine. This capability enhances the understanding of how research impacts clinical practice and policy. For more information, see the publication by Hutchins et al. (2019).
The Research, Condition, and Disease Categorization (RCDC) AI Validation Tool is designed to enhance the accuracy and completeness of RCDC categories, which are essential for public reporting of health data. By ensuring that these categories are correctly assigned, the tool supports transparency and reliability in the dissemination of research findings and health information.
The Internal Referral Module (IRM) initiative leverages artificial intelligence (AI) and natural language processing (NLP) to automate the prediction of grant applications directed to NIH Institutes and Centers (ICs). By streamlining this manual process, the IRM enhances the ability of Program Officers to make informed decisions regarding grant applications, ultimately improving the efficiency of the review process.
The NIH Grants Virtual Assistant is a chatbot designed to help users navigate and find grant-related information through the Office of Extramural Research (OER) resources. By providing immediate assistance and information, the chatbot enhances user experience and accessibility to vital grant information, facilitating the grant application process.
The Tool for Natural Gas Procurement Planning enables the NIH to develop a strategic procurement plan for natural gas. By utilizing current long-term forecasts, the tool helps set realistic price targets, ensuring that the NIH can effectively manage its energy costs and procurement strategies.
The NIH Campus Cooling Load Forecaster project is designed to predict the chilled water demand for the NIH campus over the next four days. This forecasting capability allows the management of the NIH Central Utilities Plant to effectively plan and optimize the operation and maintenance of the chiller plant, ensuring efficient energy use and reliable cooling for campus facilities.
The NIH Campus Steam Demand Forecaster project is designed to predict the steam demand for the NIH campus over the next four days. By providing accurate forecasts, this tool enables stakeholders at the NIH Central Utilities Plant to effectively plan and optimize the operation and maintenance of the steam system, ensuring efficient energy use and reliable service delivery.
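Both utility forecasters pose the same shape of problem: predict the next four days of demand from recent history. A minimal sketch, using an invented daily load series and simple lagged features with a multi-output linear model, is shown below; the real forecasters presumably draw on additional inputs such as weather and calendar effects.

```python
# Minimal sketch of a four-day-ahead demand forecaster using lagged loads; the toy series
# and feature choices stand in for whatever the NIH forecasters actually use.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
days = np.arange(120)
demand = 50 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, size=days.size)  # toy daily load

LAGS, HORIZON = 7, 4  # use the past week to predict the next four days
X, y = [], []
for t in range(LAGS, len(demand) - HORIZON):
    X.append(demand[t - LAGS:t])
    y.append(demand[t:t + HORIZON])

model = LinearRegression().fit(np.array(X), np.array(y))  # one output per horizon day
forecast = model.predict(demand[-LAGS:].reshape(1, -1))[0]
print("Next 4 days of predicted demand:", np.round(forecast, 1))
```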
The Chiller Plant Optimization project focuses on enhancing the efficiency of the chilled water production process at the NIH campus. By implementing strategies to reduce energy consumption, this initiative aims to lower operational costs and minimize the environmental impact of cooling systems, contributing to more sustainable campus operations.
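One hedged way to picture this kind of optimization is allocating a required cooling load across chillers so that total power draw is minimized, subject to capacity limits; the chiller power curves, capacities, and demand figure below are invented.

```python
# Hypothetical sketch: split a required cooling load across chillers to minimize total
# power draw; the quadratic power curves and the demand figure are invented.
import numpy as np
from scipy.optimize import minimize

demand = 9.0  # required chilled-water load (arbitrary units), illustrative

def total_power(loads):
    """Toy per-chiller power curves: power = a*load^2 + b*load."""
    a = np.array([0.6, 0.8, 0.5])
    b = np.array([1.0, 0.7, 1.2])
    return float(np.sum(a * loads**2 + b * loads))

result = minimize(
    total_power,
    x0=np.full(3, demand / 3),                        # start by splitting the load evenly
    bounds=[(0.0, 5.0)] * 3,                          # each chiller's capacity limit
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - demand}],
)
print("Optimal chiller loads:", np.round(result.x, 2), "total power:", round(result.fun, 2))
```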
The Natural Language Processing Tool for Open Text Analysis is designed to enhance facility readiness and minimize downtime by enabling the analysis of previously inaccessible data contained in open text formats. By unlocking this data, the tool allows for better decision-making and operational efficiency across various departments.
The Contracts and Grants Analytics Portal is an AI-driven tool that significantly improves the ability of HHS Office of Inspector General (OIG) staff to access and analyze grants-related data. It allows users to quickly navigate to relevant findings from thousands of audits, discover similar findings, analyze trends, compare data across operating divisions (OPDIVs), and assess potential anomalies among grantees. This enhanced accessibility and analytical capability supports more informed decision-making and oversight.
The Text Analytics Portal is designed to empower personnel without a background in analytics to efficiently examine text documents. By utilizing a suite of technologies, including search functions, topic modeling, and entity recognition, the portal simplifies the analysis process. The initial implementation focuses on specific use cases relevant to the HHS Office of Inspector General (OIG), enhancing the ability to extract insights from text data.