Notice of Special Interest (NOSI): Administrative Supplements for Advancing the Ethical Development and Use of AI/ML in Biomedical and Behavioral Sciences
Notice Number:
NOT-OD-22-065

Key Dates

Release Date:

February 4, 2022

First Available Due Date:
March 31, 2022
Expiration Date:
April 01, 2022

Related Announcements

PA-20-272 - Administrative Supplements to Existing NIH Grants and Cooperative Agreements (Parent Administrative Supplement Clinical Trial Optional)

Issued by

Office of the Director, National Institutes of Health (OD)

National Eye Institute (NEI)

National Heart, Lung, and Blood Institute (NHLBI)

National Human Genome Research Institute (NHGRI)

National Institute on Aging (NIA)

National Institute of Allergy and Infectious Diseases (NIAID)

National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS)

National Institute of Biomedical Imaging and Bioengineering (NIBIB)

Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD)

National Institute on Deafness and Other Communication Disorders (NIDCD)

National Institute of Dental and Craniofacial Research (NIDCR)

National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)

National Institute on Drug Abuse (NIDA)

National Institute of Environmental Health Sciences (NIEHS)

National Institute of General Medical Sciences (NIGMS)

National Institute of Mental Health (NIMH)

National Institute of Neurological Disorders and Stroke (NINDS)

National Institute of Nursing Research (NINR)

National Institute on Minority Health and Health Disparities (NIMHD)

National Library of Medicine (NLM)

Fogarty International Center (FIC)

National Center for Complementary and Integrative Health (NCCIH)

National Center for Advancing Translational Sciences (NCATS)

National Cancer Institute (NCI)

Purpose

The NIH Office of Data Science Strategy (ODSS) announces the availability of funds for Administrative Supplements to active NIH grants that have a significant Artificial Intelligence and Machine Learning (AI/ML) and/or ethics component. The funds will support collaborations that bring together expertise in ethics, biomedicine, data collection, and AI/ML to advance the understanding, tools, metrics, and practices for the ethical development and use of AI/ML in biomedical and behavioral sciences. This initiative is aligned with the NIH Strategic Plan for Data Science, which describes actions aimed at modernizing the biomedical research data ecosystem. For the purposes of this Notice, AI/ML is inclusive of machine learning (ML), deep learning (DL), and neural network (NN) techniques employed in distributed, federated, local, or cloud environments.

Applicants are strongly encouraged to discuss potential requests with their Institute/Center (IC) Program Official before submitting the supplemental request.

Background

Artificial intelligence and machine learning (AI/ML) encompass a collection of data-driven technologies with the potential to significantly advance scientific discovery in biomedical and behavioral research. In alignment with the NIH goal “to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science,” the development and use of AI/ML algorithms and systems in biomedical and behavioral research should be guided by a concern for human and clinical impacts. Therefore, researchers employing these technologies must take steps to minimize the harms that could result from their research, including but not limited to addressing (1) biases in datasets, algorithms, and applications; (2) issues related to identifiability and privacy; (3) impacts on disadvantaged or marginalized groups; (4) health disparities; and (5) unintended, adverse social, individual, and community consequences of research and development.

Some of the inherent characteristics of AI/ML, as well as its relative newness in the biomedical and behavioral sciences, have made it difficult for researchers to apply ethical principles in the development and use of AI/ML, particularly for basic research. For example, the “black box” nature of the decision-making process for AI/ML models can make it difficult to detect and measure potential biases or unintended consequences of the algorithms or systems.

The task of assembling datasets for AI/ML applications may lead researchers to engage a variety of stakeholders and potential sources, each requiring ethical consideration. While there are many longstanding ethical principles guiding the engagement of research subjects and the collection of data, AI/ML applications can amplify underlying biases or expose research subjects to novel privacy risks in ways that are difficult to predict. As such, research participants' understanding of AI/ML technologies, including the potential benefits to research as well as the potential risks to privacy, presents new challenges for informed consent for the use of data. Increasingly, AI/ML technologies are capable of unlocking discoveries about human health and behavior from data collected outside of research or clinical contexts, disconnected from the people from whom the data derives.

Yet another challenge for researchers is the transfer of information about AI/ML technologies that are employed throughout the research cycle. A hallmark of AI/ML (and data science employed for research more generally) is the sharing and reuse of research products including data for consumption by AI/ML models, models leveraging AI/ML techniques, and tools designed to facilitate AI/ML, such as pipelines. To promote the effective, efficient, and ethical reuse of these research products, it is critical that researchers transfer knowledge about them through metadata, documentation, or provenance.

It is the NIH vision, as outlined in the NIH Strategic Plan for Data Science, to establish a modernized and integrated biomedical data ecosystem that ethically adopts and employs the latest data science technologies, including AI/ML, throughout all stages of the research life cycle.

Research Objective

The goal of this Notice of Special Interest (NOSI) is to support new collaborations to advance the ethical development and use of AI/ML in biomedical and behavioral research. Significant expertise in both AI/ML and ethics is expected to be needed to identify and address concerns about the human and clinical impacts of AI/ML technologies in biomedicine. Ethical expertise may be derived from a range of related disciplines such as philosophy, socio-economic research, history, legal scholarship, ethics in clinical research, and behavioral psychology. Experts in AI/ML may have backgrounds in computer science, data science, or informatics. NIH seeks to encourage participation by early-career ethicists through this opportunity; thus, proposals engaging collaborators at the postdoctoral level to provide ethical expertise will be considered appropriate.

These funded collaborations are intended to generate new understanding, practices, tools, techniques, metrics, or resources that will aid others in making ethical decisions throughout the development and use of AI/ML, which includes the collection and generation of data as well as the reuse of data and models by others. NIH expects that the research products developed under this NOSI, including best practices, datasets, models, and research findings, will be shared and made broadly reusable.

These supplements may be used to support a variety of activities including, but not limited to, the following:

    • Developing and evaluating best practices for documenting and communicating relevant information about AI/ML shared research products, including data and models, to promote their ethical reuse.
      • Re-users of data and models spend considerable time “getting to know” the data or the model, which often lack detailed disclosures about their intended use, performance characteristics, or potential ethical pitfalls. Similarly, there are few tools and frameworks to help instill understanding of the ethical reuse of data or models. Supported activities could include, for example, designing or comparing designs for model cards or data sheets to help users evaluate the ethical reuse of an AI/ML-relevant research product and take steps to ensure fair and inclusive AI/ML research outcomes (a minimal, hypothetical sketch of such documentation follows this list).
    • Developing and evaluating measures and metrics for ethical, legal, and social impacts of AI/ML in biomedical and behavioral research.
      • There is a need to compare the ethical, legal, and social implications of data and AI/ML models in quantitative and reproducible ways. Funded activities could include, for example, developing measurements for bias or predictive metrics that capture the potential impacts of data or models in AI/ML; developing metrics for assessing AI/ML models with respect to socially sensitive issues; or undertaking comparative studies to evaluate how existing metrics perform when applied in different biomedical domains (the sketch following this list includes one simple example of such a measure).
    • Developing generalizable methods of exploring and addressing ethical impacts throughout the AI/ML research cycle.
      • Biases in data and models are often discovered post hoc and sometimes after causing real-world harm. There is a need for frameworks and tools to guide the ethical development of AI/ML throughout the research lifecycle.
    • Developing best practices and guidelines for ethically using personal information as well as de-identified human data that is publicly available and consented for broad use, such as those from social media platforms, for biomedical or behavioral research.
      • From a legal and policy perspective, the allowable uses of these data are very broad. Funded research activities could include examining what ethical limits should be placed on using broad consent data from the public domain; examining how perceptions of the risks and potential research benefits align with or diverge from measurable risks and benefits; or developing methods to systematically identify who benefits and who is harmed by data collection and reuse.
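
To make the first two activity areas above more concrete, the sketch below shows one hypothetical way such outputs could look in Python: a minimal set of model-card-style disclosure fields and a simple demographic parity difference measure. This NOSI does not prescribe any particular format, field names, or metric; everything in the sketch, including the class and function names, is an illustrative assumption rather than a required or endorsed approach.

    # Hypothetical sketch only; not part of this NOSI. Field names and the
    # demographic parity difference measure are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import Dict, List

    import numpy as np


    @dataclass
    class ModelCardSketch:
        """Minimal, hypothetical disclosures a re-user might need before
        deciding whether a shared AI/ML model is appropriate to reuse."""
        model_name: str
        intended_use: str
        out_of_scope_uses: List[str]
        training_data_summary: str
        known_limitations: List[str] = field(default_factory=list)
        ethical_considerations: List[str] = field(default_factory=list)
        performance_by_group: Dict[str, float] = field(default_factory=dict)


    def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Largest gap in the positive-prediction rate between any two groups
        (0.0 indicates parity); one simple, reproducible bias measure."""
        rates = [float(y_pred[group == g].mean()) for g in np.unique(group)]
        return max(rates) - min(rates)


    if __name__ == "__main__":
        # Toy, synthetic predictions for two hypothetical groups "A" and "B".
        y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
        group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
        gap = demographic_parity_difference(y_pred, group)

        card = ModelCardSketch(
            model_name="example-risk-model",
            intended_use="Illustration of documentation fields only",
            out_of_scope_uses=["Clinical decision-making"],
            training_data_summary="Synthetic toy data; no human subjects",
            known_limitations=["Toy example; not a validated model"],
            ethical_considerations=["Report subgroup performance before reuse"],
            performance_by_group={"demographic_parity_difference": gap},
        )
        print(card)

Whether disclosures or measures of this kind are adequate, and how they should be evaluated, are themselves questions within the scope of the activities described above.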

Eligibility

A broad range of projects that have significant AI/ML or ethics components are eligible, regardless of the scientific area of emphasis. The parent award must have a substantial focus on AI/ML and/or ethics.

Proposed projects must be within the scope of the parent award.

Awardees should be willing to participate in virtual meetings organized by NIH.

Applications that are not appropriate for this NOSI or are out of its scope include:

    • Projects with no engagement with the ethics community or that do not provide ethics expertise through the proposed collaboration.
    • Projects with no engagement with the AI/ML community or that do not provide AI/ML expertise through the proposed collaboration.
    • Proposals that are primarily focused on ethical issues specific to the parent award and do not explain how the insights or outputs of the proposed work could be used by others or generalized to other areas of research.
    • Proposals focused on influencing regulation of AI or policies related to the use of AI models in biomedical or clinical contexts.
    • Proposals focused on cybersecurity or other technologies used to meet existing privacy guidelines that do not also focus on new ethical questions or approaches to ethics in new research contexts.
    • Proposals that do not explicitly meet all the requirements stated elsewhere in this NOSI.
    • Proposals that are out of scope of the parent award.

Application and Submission Information

Applications submitted in response to this NOSI are strongly encouraged to include the following information:

    • A brief description of the AI/ML and/or ethics focus of the parent award.
    • A brief description of the relevant ethics and AI/ML expertise of the proposed collaboration.
    • A description of the scope of research proposed, including proposed methods and expected outcomes.
    • A description of the ethical issues to be addressed and how these relate to AI/ML in biomedical or behavioral research.
    • A statement about the potential broad impact of the proposed work to advance the understanding, tools, metrics, or practices for the ethical development and use of AI/ML in biomedical and behavioral sciences. For example, a description of how the generated insights and outputs might be used by others to improve the ethical development, use, or reuse of NIH-supported AI research products.
    • A description of how the insights and products of activities funded under this NOSI will be shared, preferably in an appropriate community or public access repository. Examples of research products include journal or data publications, workflows, guides, teaching tools, or data or software documentation.
    • A proposed timeline of activities and milestones for the 12-month supplementary funding period.

Budget

To be eligible, the parent award must be able to receive funds in FY2022 (Oct. 1, 2021 - Sept. 30, 2022) and not be in the final year or in a no-cost extension period at the time of the award. The parent award must end on or after Sept. 30, 2023.

One-time supplement budget requests cannot exceed $200,000 direct costs. The number of awards will be contingent on availability of funds and receipt of meritorious applications. It is currently anticipated that 30 awards will be made.

Eligible Activity Codes:

Additional funds may be awarded as supplements to parent awards using any Activity Code listed in PA-20-272, with the following exceptions:

Small business activity codes (such as R41, R42, R43, R44, U44, and Fast Track) are excluded, as well as G20, PS1, P60, R13, U13, U42, and UG1 awards.

Note that not all participating NIH Institutes and Centers (ICs) support all the activity codes that may otherwise be allowed. Applicants are therefore strongly encouraged to consult the program officer of the parent grant to confirm eligibility.

Centers and multi-project grant mechanisms are eligible but must provide a strong justification for why existing funds cannot be reallocated toward the proposed project.

For parent awards that are already primarily funded to explore the intersection of AI/ML and ethical issues, applicants should provide strong justification for why additional funds are needed to support collaboration with ethics or AI/ML experts, given that these activities could have been supported through the parent award.

Additional Information

Applications for this initiative must be submitted using PA-20-272 - Administrative Supplements to Existing NIH Grants and Cooperative Agreements (Parent Admin Supp Clinical Trial Optional) or its subsequent reissued equivalent.

All instructions in the SF424 (R&R) Application Guide and PA-20-272 must be followed, with the following additions:

  • Application Due Date(s): Thursday, March 31, 2022, by 5:00 PM local time of the applicant organization.
  • For funding consideration, applicants must include “NOT-OD-22-065” (without quotation marks) in the Agency Routing Identifier field (box 4B) of the SF424 R&R form. Applications without this information in box 4B will not be considered for this initiative.
  • Requests may be for one year of support only.
  • The Project Summary should briefly summarize the parent grant and describe the goals of the supplement project.
  • The Research Strategy section of the application should be limited to 5 pages.

Administrative Evaluation Process

Submitted applications must follow the guidelines of the IC that funds the parent grant. Administrative Supplements do not receive peer review. Each IC will separately conduct administrative reviews of the applications submitted to it. The most meritorious applications will then be evaluated by a trans-NIH panel of NIH staff and supported based upon the availability of funds. The criteria described below will be considered in the administrative evaluation process:

  1. What is the potential impact of the proposed work on the NIH mission?
  2. What is the potential of the proposed work to advance the understanding, tools, metrics, or practices for the ethical development and use or reuse of AI/ML research products for biomedical and behavioral research?
  3. Technical merit: To what extent are the proposed methods and techniques appropriate and likely to produce the desired or expected outcomes for the proposed research?
  4. Does the proposed project involve an interdisciplinary collaboration between ethics and AI/ML experts that is likely to successfully address the described need? Why or why not?
  5. Are the proposed budget and staffing levels adequate to carry out the proposed work? Is the budget reasonable and appropriate for the proposed scope of work?
  6. Is the proposed project technically feasible within the supplement's funding period? Are the proposed timelines and milestones adequate and realistic within a year? Why or why not?

Other Information

It is strongly recommended that applicants contact their program officer at the Institute or Center (IC) supporting the parent award in advance to:

  • Confirm that the supplement falls within the scope of the parent award;
  • Obtain the IC's requirements for submitting applications for administrative supplements.

Investigators planning to submit an application in response to this NOSI are also strongly encouraged to discuss their proposed research aims with the scientific contact listed on this NOSI in advance of the application receipt date.

Following submission, applicants are strongly encouraged to notify the program contact at the IC supporting the parent award that a request has been submitted in response to this NOSI in order to facilitate efficient processing of the request.

Within the focus of this funding announcement, additional co-funding may be available from the Office of Research on Women’s Health (ORWH) to support research that uses AI/ML approaches to address sex, gender, and scientific ethics-related inquiries. Submissions that engage early-career investigators are encouraged.

Inquiries

Please direct all inquiries to:

Laura Biven, PhD
Office of Data Science Strategy
Division of Program Coordination, Planning, and Strategic Initiatives
Office of the Director
Email: [email protected]