Department of Health and Human Services

Part 1. Overview Information

Participating Organization(s)

National Institutes of Health (NIH)

Components of Participating Organizations

National Institute of Mental Health (NIMH)

Funding Opportunity Title
Pilot Effectiveness Trials for Treatment, Preventive and Services Interventions (R34 Clinical Trial Required)
Activity Code

R34 Planning Grant

Announcement Type
Reissue of RFA-MH-18-706
Related Notices

See Notices of Special Interest associated with this funding opportunity

None

Funding Opportunity Announcement (FOA) Number
PAR-21-131
Companion Funding Opportunity
PAR-21-129, R01 Research Project
PAR-21-130, R01 Research Project
PAR-21-132, R01 Research Project
PAR-21-133, U01 Research Project (Cooperative Agreements)
PAR-21-134, R33 Exploratory/Developmental Grants Phase II
PAR-21-135, R61/R33 Phase 1 Exploratory/Developmental Grant / Exploratory/Developmental Grants Phase II
PAR-21-136, R33 Exploratory/Developmental Grants Phase II
PAR-21-137, R61/R33 Phase 1 Exploratory/Developmental Grant / Exploratory/Developmental Grants Phase II
Assistance Listing Number(s)
93.242
Funding Opportunity Purpose

NIMH solicits clinical trial applications through a series of Funding Opportunity Announcements (FOAs) that cover the intervention development pipeline, from first-in-human early testing of new interventions, through confirmatory efficacy trials, to effectiveness trials. The purpose of this FOA is to encourage pilot research consistent with NIMH's priorities for: 1) effectiveness research on preventive and therapeutic interventions with previously demonstrated efficacy, for use with broader target populations or for use in community practice settings, and 2) research on the development and preliminary testing of innovative services interventions.

Consistent with the NIMH experimental therapeutics approach, this FOA is intended to support pilot studies of intervention effectiveness or service delivery approaches that explicitly address whether the intervention engages the target(s)/mechanism(s) presumed to underlie the intervention effects (i.e., the mechanism(s) that accounts for changes in clinical/functional outcomes, changes in provider behavior, improved access or continuity of services, etc.).  In this pilot effectiveness phase of research, NIMH places highest priority on intervention and service delivery approaches that can be justified in terms of their potential to substantially impact practice and public health.

This FOA supports pilot studies and provides resources for evaluating the feasibility, tolerability, acceptability, and safety, as well as the preliminary effectiveness, of approaches to improve mental health/functional outcomes, to modify risk factors, or to improve service delivery, and for obtaining the preliminary data needed as a prerequisite to a larger-scale effectiveness trial (e.g., comparative effectiveness study, pragmatic trial). Support for fully-powered effectiveness studies is provided through separate FOAs that utilize the R01 mechanism for single-site effectiveness trials (PAR-21-130, "Clinical Trials to Test the Effectiveness of Treatment, Preventive, and Services Interventions (R01)") and the collaborative R01 mechanism for multi-site effectiveness trials (PAR-21-129, "Clinical Trials to Test the Effectiveness of Treatment, Prevention, and Services Interventions (Collaborative R01 Clinical Trial Required)"). Applicants pursuing other stages of the clinical trial pipeline should consider one of the companion FOAs listed above.

Key Dates

Posted Date
March 02, 2021
Open Date (Earliest Submission Date)
May 15, 2021
Letter of Intent Due Date(s)

30 days prior to the application due date

Application Due Dates, Review and Award Cycles

New | Renewal / Resubmission / Revision (as allowed) | AIDS | Scientific Merit Review | Advisory Council Review | Earliest Start Date
June 15, 2021 | June 15, 2021 | Not Applicable | October 2021 | January 2022 | March 2022
October 15, 2021 | October 15, 2021 | Not Applicable | February 2022 | May 2022 | July 2022
February 15, 2022 | February 15, 2022 | Not Applicable | June 2022 | October 2022 | December 2022
June 15, 2022 | June 15, 2022 | Not Applicable | October 2022 | January 2023 | March 2023
October 14, 2022 | October 14, 2022 | Not Applicable | February 2023 | May 2023 | July 2023
February 15, 2023 | February 15, 2023 | Not Applicable | June 2023 | October 2023 | December 2023
June 15, 2023 | June 15, 2023 | Not Applicable | October 2023 | January 2024 | March 2024
October 17, 2023 | October 17, 2023 | Not Applicable | February 2024 | May 2024 | July 2024
February 15, 2024 | February 15, 2024 | Not Applicable | June 2024 | October 2024 | December 2024

All applications are due by 5:00 PM local time of applicant organization. All types of non-AIDS applications allowed for this funding opportunity announcement are due on the listed date(s).

Applicants are encouraged to apply early to allow adequate time to make any corrections to errors found in the application during the submission process by the due date.

Expiration Date
February 16, 2024
Due Dates for E.O. 12372

Not Applicable

Required Application Instructions

It is critical that applicants follow the instructions in the Research (R) Instructions in the SF424 (R&R) Application Guide, except where instructed to do otherwise (in this FOA or in a Notice from NIH Guide for Grants and Contracts).

Conformance to all requirements (both in the Application Guide and the FOA) is required and strictly enforced. Applicants must read and follow all application instructions in the Application Guide as well as any program-specific instructions noted in Section IV. When the program-specific instructions deviate from those in the Application Guide, follow the program-specific instructions.

Applications that do not comply with these instructions may be delayed or not accepted for review.

There are several options available to submit your application through Grants.gov to NIH and Department of Health and Human Services partners. You must use one of these submission options to access the application forms for this opportunity.

  1. Use the NIH ASSIST system to prepare, submit and track your application online.
  2. Use an institutional system-to-system (S2S) solution to prepare and submit your application to Grants.gov and eRA Commons to track your application. Check with your institutional officials regarding availability.
  3. Use Grants.gov Workspace to prepare and submit your application and eRA Commons to track your application.

Part 2. Full Text of Announcement

Section I. Funding Opportunity Description

Purpose

The purpose of this Funding Opportunity Announcement (FOA) is to encourage pilot research consistent with NIMH's priorities for: 1) effectiveness research on preventive and therapeutic interventions with previously demonstrated efficacy, for use with broader target populations or for use in community practice settings, and 2) research on the development and preliminary testing of innovative services interventions. Applications should provide resources for evaluating the feasibility, tolerability, acceptability, and safety, as well as the preliminary effectiveness, of approaches to improving mental health or functional outcomes, or modifying risk factors, and for obtaining the preliminary data needed as a prerequisite to a larger-scale effectiveness trial (e.g., comparative effectiveness trial, pragmatic trial). In this pilot phase of effectiveness research, NIMH places the highest priority on approaches that are empirically grounded and that can be justified in terms of their potential to substantially impact practice and public health (i.e., in terms of the magnitude of likely improvements in clinical benefit, safety/tolerability profile, value and efficiency, or scalability potential, as compared to existing approaches).

Adaptations or augmentations of efficacious preventive, therapeutic, or services interventions should only be undertaken if there is (a) an empirical rationale for the adaptation/augmentation target (i.e., a clear association of the adaptation/augmentation with non-response, partial response, patient non-engagement, or relapse), (b) a clear hypothesis and plan to address the mechanism by which the adapted intervention or augmentation will enhance outcomes, and (c) evidence to suggest that the adapted intervention will result in a substantial improvement in response rate, speed of response, an aspect of care, or uptake in community/practice settings.

Consistent with the NIMH experimental therapeutics approach, this FOA is intended to support pilot tests of intervention effectiveness or service delivery approaches that explicitly address whether the intervention engages the target(s)/mechanism(s) presumed to underlie the intervention effects (i.e., the mechanism that accounts for changes in clinical/functional outcomes, changes in provider behavior, improved access or continuity of services, etc.). The goal is to re-confirm whether the intervention targets and the associated change mechanisms previously identified under more controlled efficacy conditions are operative in the effectiveness context. In this manner, the results of the pilot effectiveness trial will advance knowledge regarding therapeutic change mechanisms and inform decisions about whether further effectiveness testing is warranted (see Support for Clinical Trials at NIMH).

Intervention "targets" and change mechanisms will vary depending on the nature of the intervention, but in all cases should be empirically justified. For preventive and therapeutic interventions, targets include factors that have been empirically associated with risk for, or with the etiology, maintenance, or severity/course of, the disorder or condition of interest. These targets might involve specific psychological, behavioral, or interpersonal processes (e.g., cognitive-affective processes such as emotion regulation, attention bias, cognitive control, or stress regulation) or neurobiological entities (e.g., brain circuits).

This FOA is also intended to support research on the development and preliminary testing of innovative services interventions. Targets/mechanisms for such services interventions might involve mutable consumer- or provider-level behaviors, or organizational-/system-level factors, that are intervened upon in order to improve the access, continuity, quality, equity, and/or value of services.

Valid and reliable measures of change in the hypothesized target(s)/mechanism(s) will provide useful information about key change mechanisms that account for intervention effects. In the assessment of target engagement, NIMH encourages the use of measures that are as direct and objective as is feasible in the effectiveness setting. Specifically encouraged are empirically validated measures of the construct that extend beyond self-reports and other subjective measures, where possible, and inclusion of measures that span more than one level of assessment if possible and appropriate. 

This R34 FOA supports pilot effectiveness studies focused on refining and optimizing preventive and therapeutic interventions with previously demonstrated efficacy for use with broader target populations or for use in community practice settings, in anticipation of fully-powered trials (e.g., practical trials). With appropriate justification, effectiveness testing of preventive and therapeutic interventions might be warranted in the absence of extensive efficacy data (e.g., when the intervention is primarily comprised of research-informed strategies, but the specific strategies have not been extensively tested in combination or with a specific target population; or when there is strong pilot data and the goal is to conduct further testing in a deployment-focused manner to expedite the translation into practice). To reduce the alarming fall-off in effect sizes from efficacy to effectiveness studies, and to expedite translation from intervention development to practice-ready interventions, this FOA can be used to conduct pilot trials in community settings as early as possible following a demonstration of efficacy. Researchers interested in novel intervention development that is explicitly focused on the initial translation of neurobiological, basic cognitive, and behavioral science findings into novel interventions are referred to the webpage on Support for Clinical Trials at NIMH.

To facilitate translation into practice:

  • This FOA is intended to support research that reflects a deployment-focused model of intervention and services design and testing that considers the perspective of key stakeholders (e.g., service users, providers, administrators, payers) and the characteristics of the settings (e.g., resources, including workforce capacity; existing clinical workflows) where optimized mental health interventions and services are intended to be implemented. This attention to end-user perspectives and characteristics of intended clinical and/or community practice settings is intended to ensure that the resultant interventions and service delivery strategies are feasible and scalable, and to ensure that the research results will have utility for end users.
  • NIMH encourages projects testing the effectiveness of preventive, therapeutic, or services interventions that are designed as hybrid effectiveness-implementation trials, as appropriate, depending on the level of pre-existing effectiveness evidence and implementation readiness. Thus, in addition to testing the effectiveness of a preventive or therapeutic intervention, NIMH encourages effectiveness trials that are designed to assess and examine consumer-, provider-, and setting-level factors that might be associated with implementation fidelity (i.e., as Hybrid Type I trials) or to simultaneously test strategies to promote successful implementation (i.e., as Hybrid Type II trials). Likewise, studies that are primarily aimed at testing an implementation or dissemination strategy should be designed to also assess the outcomes and effectiveness of the intervention/approach that is being implemented, as appropriate and feasible (i.e., as Hybrid Type III trials).
  • NIMH encourages the development and testing of intervention and service delivery strategies that incorporate features that are specifically designed to prevent threats to implementation fidelity. Strategies that might be used to enhance scalability and sustained implementation include but are not limited to: consumer-facing technology (e.g., self-administered content) and provider-facing technology (e.g., technology to support provider training and sustained implementation fidelity); expert consultation via existing resources or other sustainable means (e.g., telehealth, collaborative care approaches); or other robust design features that promote provider competence and sustained implementation fidelity.

Effective prevention and treatment of mental illness have the potential to reduce morbidity and mortality associated with intentional injury (i.e., suicide attempts and deaths, see: www.suicide-research-agenda.org). Lack of attention to the assessment of these outcomes has limited our understanding regarding the degree to which effective mental health interventions might offer prophylaxis. Accordingly, where feasible and appropriate, NIMH encourages effectiveness research that includes assessment of suicidal behavior in clinical trials in response to this FOA using strategies that can facilitate integration and sharing of data (e.g., see NOT-MH-15-009 and https://www.phenxtoolkit.org/ for example constructs and corresponding assessment strategies).

Potential applicants are strongly encouraged to contact Scientific/Research contacts as far in advance as possible to discuss the potential clinical practice/public health impact of the proposed pilot investigation, as well as concordance with current NIMH priorities.

Information about the mission, Strategic Plan, and research interests of the NIMH can be found on the NIMH website. Applicants are also strongly encouraged to review the information on Support for Clinical Trials at NIMH and the NIMH webpage on clinical research.

Scope of Pilot Research

Pilot effectiveness studies in response to this announcement should propose the developmental work that would justify and inform the design of a subsequent randomized controlled trial (RCT) or a highly rigorous trial in community treatment settings where randomization is not possible. Pilot trials should be explicitly designed to enhance the probability of obtaining meaningful results in subsequent well-powered trials by including measures of the presumed mechanism(s) of action (e.g., specific neurobiological entities, such as brain circuits, or psychological or behavioral processes, such as attention bias, cognitive control, or stress regulation) that underlie the intervention effects and account for changes in clinical/functional outcomes, changes in provider behavior, systems-level improvements, etc., as appropriate and feasible in the effectiveness context. For studies that involve preventive or therapeutic interventions, as appropriate, the study should take into account Research Domain Criteria (RDoC) or RDoC-like constructs when defining the subject eligibility (inclusion), intervention targets or mechanisms, and outcomes (see the RDoC webpage for more details). In this manner, pilot study results – whether positive or negative – will provide information of high utility to the field by informing decisions about whether further testing is warranted and by advancing knowledge regarding therapeutic change mechanisms.

This pilot FOA also affords an opportunity to refine and pilot test the experimental protocols, including the assessment protocol, the experimental intervention protocol (e.g., the manual for a psychosocial intervention, the dosing schedule for a psychosocial or pharmacological approach), and the comparison intervention protocol and randomization procedures (if appropriate); to examine the feasibility of recruiting and retaining participants into the study conditions (including the experimental condition and the comparison condition, if relevant); and to explore the feasibility of delivering the intervention with the target population (e.g., a case series in a community practice setting). 

Accordingly, the collection of preliminary data regarding feasibility, acceptability, safety, tolerability, and target outcomes is appropriate. Given the intended pilot nature of the R34 mechanism, conducting formal tests of clinical outcomes or attempting to obtain an estimate of an effect size is often not justified. Given the limited sample sizes typically supportable under this pilot study mechanism, the variability in the effect sizes obtained is often so large as to be unreliable. Thus, using these potentially unstable effect size estimates in power calculations for larger studies, without regard to clinical meaningfulness, is not advisable.

Pilot Effectiveness Studies

NIMH supports intervention research to examine effectiveness (i.e., the utility of research-based approaches in community practice settings) and research to optimize, sequence, and personalize intervention approaches for improved response rates, more complete and rapid remission, improved functional outcomes,  more efficient clinical practice or improved organizational or system functioning. Such studies might focus on empirically justified new, adapted, or augmented interventions designed to significantly improve outcomes or facilitate their uptake; strategies that employ predictors or moderators of treatment response (e.g., clinical/socio-demographic information, measures of neuropsychological functioning, biomarkers, surrogate markers of early response) to construct and test algorithms for more personalized treatment selection, sequencing, or step-wise approaches to care; and more efficient, scalable strategies for implementing interventions (e.g., technology-assisted approaches) that can improve the value and facilitate the dissemination and implementation of evidence-based care.  

The primary goal of effectiveness research and other public health-oriented practical trials is to determine whether interventions with demonstrated efficacy under tightly controlled conditions can have a measurable beneficial effect when implemented in less precisely controlled circumstances with more heterogeneous populations, providers, and settings. Typically, these studies have broader inclusion criteria (i.e., relatively few exclusion criteria) and broader outcomes, including functioning (school, work, interpersonal, etc.), quality of life, mortality, institutionalization, and health care resource use, in addition to measures of psychiatric morbidity, and are principally aimed at informing practice and programmatic decision making.

Pilot effectiveness research can be considered to involve a series of steps including: (step 1) an initial step where characteristics of typical patients, providers, and the community/practice settings to which the intervention will be transported are systematically studied; (step 2) a step in which empirically suggested adaptations to improve the fit of the intervention are operationalized and incorporated in intervention manuals and materials; and (step 3) a step in which the intervention is pilot tested. 

Step 1. A systematic assessment of the degree of fit between the un-adapted intervention and the target population/provider/setting typically precedes the refinement of interventions with demonstrated efficacy for use with new target populations or for use in community practice settings. At this step, relevant characteristics of the target population/provider/setting are specified and needed refinements to the intervention (e.g., adaptation to intervention content or format) are identified. In some cases, the information regarding the characteristics of the target population or setting that have implications for informing the adaptation may already be known from previous research, and additional step 1 activities may not be needed prior to steps 2 and 3, below. Regardless of whether the information to inform the intervention adaptation is based on extant studies or based on step 1 activities, as described above, the justification for a refined or adapted intervention should be based on data describing characteristics of client subgroups, settings, care providers, or other relevant variables that are associated with non-response, partial response, relapse, or poor uptake. 

Adaptation or augmentation of efficacious interventions should only be undertaken if there is a compelling rationale, supported by empirical evidence, that justifies the adaptation in terms of: (a) theoretical and empirical support for the adaptation target (e.g., a prognostic variable, such as a measure of neurocognitive functioning, that has been associated with non-response, partial response, patient non-engagement, or relapse); (b) a theoretical/empirical explanation of how the prognostic variable interferes with the response to the un-adapted intervention (i.e., a clear hypothesis regarding the mechanism by which the prognostic variable moderates response to the un-adapted intervention or functions to disadvantage a subgroup); and (c) evidence to suggest that the adapted intervention will result in a substantial improvement in response rate, speed of response, efficiency or other aspects of care, or uptake in community/practice settings when compared to the un-adapted intervention or existing intervention approaches (see the NAMHC Workgroup Report, "From Discovery to Cure: Accelerating the Development of New and Personalized Interventions for Mental Illnesses," Recommendation 2.4.1, page 19, for additional guidance regarding the empirical justification for intervention adaptations and augmentations).

Applications that propose adaptations or augmentations of currently available treatments for new subpopulations or settings without strong empirical justification will be considered low priority.

Step 2. Standardization of an intervention primarily involves further development or adaptation of the intervention protocol (e.g., psychotherapy manual, medication dosing schedule), including iterative refinements based on extant research and on feedback from patients, care providers, managers, payers, other key stakeholders, and other investigators. For preventive, therapeutic, and services interventions, a critical step involves iterative refinement (e.g., through pilot cases or a case series) to optimize the intervention dose (i.e., amount, timing, duration) and to establish the dosing that is sufficient to engage intervention targets and produce clinical benefit in the community practice context. NIMH encourages a deployment-focused approach in which end-user feedback is systematically incorporated to refine the intervention and research protocols. Accordingly, this step might also involve refinements to assessment strategies (e.g., measures to determine inclusion/exclusion; measures and assessment schedules to assess change mechanisms as well as psychiatric and other outcomes, including functioning, and value and efficiency; and measures to assess acceptability and usability of the intervention protocol, fidelity of implementation, and client/patient engagement and adherence).

Step 3. The pilot effectiveness stage involves feasibility testing of the intervention protocol and assessment strategies developed in earlier stages of intervention development/refinement. Typically, data are gathered to examine the feasibility of: identifying, enrolling, and retaining participants; implementing the experimental intervention (and comparison condition, as relevant); and implementing the assessment protocols, including examining the feasibility of measures for assessing inclusion/exclusion, the fidelity of intervention delivery, mediators/moderators of response, and outcomes. 

While establishing feasibility and acceptability is a major goal of pilot effectiveness studies, under NIMH's experimental medicine-based approach to intervention development and testing (see Support for Clinical Trials at NIMH), a focus on feasibility/acceptability is necessary but not sufficient: all trials should be designed to explicitly address whether the intervention engages the target(s)/mechanism(s) presumed to underlie the intervention effects (the target(s)/mechanism(s) that account for changes in clinical/functional outcomes, changes in provider behavior, etc.). Accordingly, the application/scope of work should address the following: 1) the empirical basis for the selection of the proximal targets (e.g., prior evidence linking the targets to the outcomes of interest); 2) plans for assessing whether the intervention leads to the hypothesized changes in proximal targets/mechanisms; and 3) plans for a preliminary examination of whether intervention-induced changes in the targets/mechanisms are associated with changes in the outcomes of interest.

Depending on the characteristics of the disorder or target population, the nature of the intervention, the nature of intended providers or settings, and the stage of the research program, pilot testing may take various forms. Pilot testing might involve a systematic study of an uncontrolled case series, a small randomized pilot trial to explore the feasibility of both the experimental intervention and the control condition that will be used in the subsequent definitive trial, or alternative designs (e.g., quasi-experimental designs, multiple-baseline, single-case designs). 

The boundaries between these three steps are not discrete. It is expected that most applications will propose research activities or methods that span all of the above-noted steps, and all projects will include step 3 activities that will provide the prerequisite information needed for conducting a larger (e.g., R01) study.

Intervention research that addresses the objectives outlined in the NIMH Strategic Plan is strongly encouraged. Additional priorities for intervention research are detailed in the NAMHC Workgroup Report "From Discovery to Cure: Accelerating the Development of New and Personalized Interventions for Mental Illnesses." Updated priorities for NIMH clinical trials research are also detailed on the NIMH webpage on Support for Clinical Trials at NIMH.

NIMH emphasizes the potential for impact on practice and public health (e.g., in terms of the magnitude of likely improvements in clinical benefit, safety/tolerability profile, value and efficiency, or dissemination potential, as compared to existing approaches) in new/adapted interventions and in effectiveness studies and practical trials.

Consistent with recommendations of the NAMHC Workgroup on interventions research (noted above), NIMH encourages clinical trials research that will foster personalized mental health care, including studies aimed at identifying predictor/moderator variables (e.g., clinical/socio-demographic information, biomarkers, surrogate markers of early response) that can be used to characterize patients more likely to benefit from a specific treatment and, when appropriate, encourages the use of more advanced research designs that can be used to examine prescriptive approaches for matching individuals to optimal care (e.g., adaptive designs, stepped-care approaches). Studies that involve randomization of large samples in pursuit of incremental gains in effect sizes, especially without attention to modifiers of response that have implications for personalizing care, will be considered lower priority. Accordingly, pilot studies in anticipation of trials to examine more personalized approaches are encouraged.

NIMH encourages pilot effectiveness studies that maximize efficiencies and examine the feasibility of utilizing existing infrastructure (e.g., practice-based research networks, electronic medical records, administrative databases, patient registries) to increase the efficiency of participant recruitment (e.g., more rapid identification and enrollment) and to facilitate the collection of moderator data (e.g., clinical characteristics, biomarkers), longer-term follow-up data, and broader, stakeholder-relevant outcomes (e.g., mental health and general health care utilization). 

Examples of possible NIMH-relevant effectiveness research questions include, but are not limited to:

  • Feasibility studies to explore the degree to which efficacious preventive, therapeutic, or services interventions generalize to typical community/practice settings and studies that aim to increase the clinical impact of interventions by optimizing and enhancing intervention effects beyond those observed in efficacy trials.
  • Studies to pilot test research instruments, data-management procedures, and training/supervision protocols in preparation for a larger scale effectiveness study or practical trial.
  • Studies to systematically adapt efficacious interventions to improve their utility for use in community/practice settings (e.g., in vivo intervention refinement to address common patterns of comorbidity) where evidence suggests adapted interventions could lead to a substantial improvement in fit and outcomes.
  • Studies to examine/adapt parameters of evidence-informed interventions (e.g., dose, duration, method of administration) that impact generalization of efficacious interventions to practice settings.
  • Research examining patient-, provider-, and organizational- level factors that impact the transportability of interventions (i.e., the degree to which the evidence-informed intervention can be implemented with fidelity, the degree to which effects observed in efficacy studies generalize).
  • Studies to adapt and pilot test interventions that were developed and tested in mental health specialty settings to determine capacity and fit for use in non-specialty community/practice settings (e.g., primary care, schools, criminal justice settings), e.g., as part of a stepped-care delivery system.
  • Pilot tests of personalized treatment algorithms to examine feasibility, patient satisfaction, and clinician acceptability, followed by effectiveness trials for definitive tests of personalized approaches.
  • Pilot studies to examine the feasibility of modular or stepped-care approaches for matching treatment components to patients' areas of greatest need.
  • Pilot trials to test or compare patient-, provider-, or systems-level services interventions designed to promote service access, use, engagement, and retention, or to improve quality or outcomes of care, including approaches to reduce empirically documented outcome disparities experienced by people in specific subgroups.
  • Studies to refine and pilot test diffusion strategies (i.e., dissemination and implementation) to improve adoption of evidence-based treatments in practice settings.
  • Pilot trials that test provider-, organizational-, or systems-level interventions to increase the uptake, fidelity, and sustained use of evidence-based MH interventions (e.g., studies comparing approaches to provider training and supervision).

Applications Not Responsive to this FOA

The following types of studies are not responsive to this FOA and will not be reviewed:

  • Applications whose scope of work involves examining intervention effectiveness without studying whether the intervention engages the target(s) presumed to underlie benefits and without examining whether intervention-induced changes in targets are associated with clinical benefit.
  • Adaptations of existing interventions in the absence of a compelling justification and in the absence of a clear experimental therapeutics approach to examining how the intervention engages the adaptation target (see the NAMHC Workgroup Report, "From Discovery to Cure: Accelerating the Development of New and Personalized Interventions for Mental Illnesses," Recommendation 2.4.1, page 19, for additional guidance regarding the empirical justification for intervention adaptations and augmentations).
  • Studies conducted in academic research laboratories as opposed to effectiveness studies in community practice clinics/settings (e.g., studies in research clinics that involve research therapists or other features that are not representative of typical practice settings and substantially impact generalizability).
  • Trials using patented medications that lack superior efficacy or safety relative to currently available off-patent medications.
  • Studies of stigma or health literacy interventions that do not explicitly study the impact on mental health service access, engagement, quality, and/or outcomes of care.

Scale and Scope of Studies Covered Under this Announcement

This FOA supports pilot effectiveness research to evaluate the feasibility, tolerability, acceptability, safety, and preliminary indications of effectiveness of preventive, therapeutic, and services interventions; to examine whether the intervention engages the target/mechanism that is presumed to underlie the intervention effects; and to obtain preliminary data needed as a prerequisite to a larger-scale effectiveness trial (e.g., comparative effectiveness study, practical trial) designed to definitively test the effectiveness of interventions to improve post-acute outcomes.

Support for fully powered effectiveness studies is provided via separate FOAs, including a FOA for single R01 applications (PAR-21-130, "Clinical Trials to Test the Effectiveness of Treatment, Preventive, and Services Interventions (R01)") and a collaborative R01 FOA for multi-site trials (PAR-21-129, "Clinical Trials to Test the Effectiveness of Treatment, Preventive, and Services Interventions (Collaborative R01)").

Support for pilot studies leading to services research other than clinical trials (e.g., research to identify mutable factors that impact access, utilization, quality, outcomes or scalability of mental health services; development of new research tools, measures, or methods; or testing the feasibility of integrating existing data sets to understand factors affecting access, quality or outcomes of care) is provided through PAR-19-189 ("Pilot Services Research Grants Not Involving Clinical Trials (R34 Clinical Trial Not Allowed)").    

Investigators are strongly encouraged to visit the Support for Clinical Trials at NIMH webpage and consult with Scientific/Research Staff regarding grant activities that are appropriately matched to the stage of intervention research and study scope.

Applications with data collection plans that involve multiple respondent groups (e.g., clients/patients, therapists/providers, supervisors, administrators) should address provisions for human subject protections and consenting procedures for all participant groups, accordingly.

The NIMH has published updated policies and guidance for investigators regarding human research protection and clinical research data and safety monitoring (NOT-MH-19-027). The application’s PHS Human Subjects and Clinical Trials Information, including the Data and Safety Monitoring Plan, should reflect the policies and guidance in this notice. Plans for the protection of research participants and data and safety monitoring will be reviewed by the NIMH for consistency with NIMH and NIH policies and federal regulations.

See Section VIII. Other Information for award authorities and regulations.

Section II. Award Information

Funding Instrument

Grant: A support mechanism providing money, property, or both to an eligible entity to carry out an approved project or activity.

Application Types Allowed
New
Resubmission
Revision

Resubmission from RFA-MH-18-706 and PAR-21-131

Revision from RFA-MH-17-612, RFA-MH-18-706, and PAR-21-131

The OER Glossary and the SF424 (R&R) Application Guide provide details on these application types. Only those application types listed here are allowed for this FOA.

Clinical Trial?

Required: Only accepting applications that propose clinical trial(s).

Need help determining whether you are doing a clinical trial?

Funds Available and Anticipated Number of Awards

NIMH intends to commit a total of $27 million for FY 2022 to fund this FOA and the companion FOAs listed in Part 1. Overview Information.

Award Budget

Direct costs are limited to $450,000 over the R34 project period, with no more than $225,000 in direct costs allowed in any single year.

Award Project Period

The total project period for an application submitted in response to this funding opportunity may not exceed three years.      

NIH grants policies as described in the NIH Grants Policy Statement will apply to the applications submitted and awards made from this FOA.

Section III. Eligibility Information

1. Eligible Applicants

Eligible Organizations

Higher Education Institutions

  • Public/State Controlled Institutions of Higher Education
  • Private Institutions of Higher Education

The following types of Higher Education Institutions are always encouraged to apply for NIH support as Public or Private Institutions of Higher Education:

  • Hispanic-serving Institutions
  • Historically Black Colleges and Universities (HBCUs)
  • Tribally Controlled Colleges and Universities (TCCUs)
  • Alaska Native and Native Hawaiian Serving Institutions
  • Asian American Native American Pacific Islander Serving Institutions (AANAPISIs)

Nonprofits Other Than Institutions of Higher Education

  • Nonprofits with 501(c)(3) IRS Status (Other than Institutions of Higher Education)
  • Nonprofits without 501(c)(3) IRS Status (Other than Institutions of Higher Education)

For-Profit Organizations

  • Small Businesses
  • For-Profit Organizations (Other than Small Businesses)

Local Governments

  • State Governments
  • County Governments
  • City or Township Governments
  • Special District Governments
  • Indian/Native American Tribal Governments (Federally Recognized)
  • Indian/Native American Tribal Governments (Other than Federally Recognized)

Federal Governments

  • Eligible Agencies of the Federal Government
  • U.S. Territory or Possession

Other

  • Independent School Districts
  • Public Housing Authorities/Indian Housing Authorities
  • Native American Tribal Organizations (other than Federally recognized tribal governments)
  • Faith-based or Community-based Organizations
  • Regional Organizations
  • Non-domestic (non-U.S.) Entities (Foreign Institutions)
Foreign Institutions

Non-domestic (non-U.S.) Entities (Foreign Institutions) are eligible to apply.

Non-domestic (non-U.S.) components of U.S. Organizations are eligible to apply.

Foreign components, as defined in the NIH Grants Policy Statement, are allowed. 

Required Registrations

Applicant organizations

Applicant organizations must complete and maintain the following registrations as described in the SF 424 (R&R) Application Guide to be eligible to apply for or receive an award. All registrations must be completed prior to the application being submitted. Registration can take 6 weeks or more, so applicants should begin the registration process as soon as possible. The NIH Policy on Late Submission of Grant Applications states that failure to complete registrations in advance of a due date is not a valid reason for a late submission.

  • Dun and Bradstreet Universal Numbering System (DUNS) - All registrations require that applicants be issued a DUNS number. After obtaining a DUNS number, applicants can begin both SAM and eRA Commons registrations. The same DUNS number must be used for all registrations, as well as on the grant application.
  • System for Award Management (SAM) – Applicants must complete and maintain an active registration, which requires renewal at least annually. The renewal process may require as much time as the initial registration. SAM registration includes the assignment of a Commercial and Government Entity (CAGE) Code for domestic organizations which have not already been assigned a CAGE Code.
  • eRA Commons - Applicants must have an active DUNS number to register in eRA Commons. Organizations can register with the eRA Commons as they are working through their SAM or Grants.gov registration, but all registrations must be in place by time of submission. eRA Commons requires organizations to identify at least one Signing Official (SO) and at least one Program Director/Principal Investigator (PD/PI) account in order to submit an application.
  • Grants.gov – Applicants must have an active DUNS number and SAM registration in order to complete the Grants.gov registration.

Program Directors/Principal Investigators (PD(s)/PI(s))

All PD(s)/PI(s) must have an eRA Commons account.  PD(s)/PI(s) should work with their organizational officials to either create a new account or to affiliate their existing account with the applicant organization in eRA Commons. If the PD/PI is also the organizational Signing Official, they must have two distinct eRA Commons accounts, one for each role. Obtaining an eRA Commons account can take up to 2 weeks.

Eligible Individuals (Program Director/Principal Investigator)

Any individual(s) with the skills, knowledge, and resources necessary to carry out the proposed research as the Program Director(s)/Principal Investigator(s) (PD(s)/PI(s)) is invited to work with his/her organization to develop an application for support. Individuals from underrepresented racial and ethnic groups as well as individuals with disabilities are always encouraged to apply for NIH support.

For institutions/organizations proposing multiple PDs/PIs, visit the Multiple Program Director/Principal Investigator Policy and submission details in the Senior/Key Person Profile (Expanded) Component of the SF424 (R&R) Application Guide.

2. Cost Sharing

This FOA does not require cost sharing as defined in the NIH Grants Policy Statement.

3. Additional Information on Eligibility

Number of Applications

Applicant organizations may submit more than one application, provided that each application is scientifically distinct.

The NIH will not accept duplicate or highly overlapping applications under review at the same time.  This means that the NIH will not accept:

  • A new (A0) application that is submitted before issuance of the summary statement from the review of an overlapping new (A0) or resubmission (A1) application.
  • A resubmission (A1) application that is submitted before issuance of the summary statement from the review of the previous new (A0) application.
  • An application that has substantial overlap with another application pending appeal of initial peer review (see NOT-OD-11-101).

Section IV. Application and Submission Information

1. Requesting an Application Package

The application forms package specific to this opportunity must be accessed through ASSIST, Grants.gov Workspace or an institutional system-to-system solution. Links to apply using ASSIST or Grants.gov Workspace are available in Part 1 of this FOA. See your administrative office for instructions if you plan to use an institutional system-to-system solution.

2. Content and Form of Application Submission

It is critical that applicants follow the instructions in the Research (R) Instructions in the SF424 (R&R) Application Guide except where instructed in this funding opportunity announcement to do otherwise. Conformance to the requirements in the Application Guide is required and strictly enforced. Applications that are out of compliance with these instructions may be delayed or not accepted for review.

Letter of Intent

Although a letter of intent is not required, is not binding, and does not enter into the review of a subsequent application, the information that it contains allows IC staff to estimate the potential review workload and plan the review.

By the date listed in Part 1. Overview Information, prospective applicants are asked to submit a letter of intent that includes the following information:

  • Descriptive title of proposed activity
  • Name(s), address(es), and telephone number(s) of the PD(s)/PI(s)
  • Names of other key personnel
  • Participating institution(s)
  • Number and title of this funding opportunity

The letter of intent should be sent to:


Email: NIMHPeerReview@mail.nih.gov

Page Limitations

All page limitations described in the SF424 Application Guide and the Table of Page Limits must be followed.

Instructions for Application Submission

The following section supplements the instructions found in the SF424 (R&R) Application Guide and should be used for preparing an application to this FOA.

SF424(R&R) Cover

All instructions in the SF424 (R&R) Application Guide must be followed.

SF424(R&R) Project/Performance Site Locations

All instructions in the SF424 (R&R) Application Guide must be followed.

SF424(R&R) Other Project Information

All instructions in the SF424 (R&R) Application Guide must be followed.

Facilities and Other Resources: The description of the resources and environment should address how the study utilizes existing infrastructure (e.g., CTSAs, practice-based research networks, electronic medical records, administrative databases, patient registries) or utilizes other available resources to increase the efficiency of participant recruitment and data collection or provide a justification in the event that such efficiencies cannot be incorporated. 

SF424(R&R) Senior/Key Person Profile

All instructions in the SF424 (R&R) Application Guide must be followed.

As appropriate, Senior/Key Personnel should demonstrate their experience and expertise at collaborating with community practice partners/providers, consumers, and relevant policy makers to conduct effectiveness studies.

R&R or Modular Budget

All instructions in the SF424 (R&R) Application Guide must be followed.

R&R Subaward Budget

All instructions in the SF424 (R&R) Application Guide must be followed.

PHS 398 Cover Page Supplement

All instructions in the SF424 (R&R) Application Guide must be followed.

PHS 398 Research Plan

All instructions in the SF424 (R&R) Application Guide must be followed, with the following additional instructions:

Research Strategy: The Research Strategy should include the following information:

Significance:

  • Justify the practical effect on public health of the intervention or service approach in terms of the estimated hypothesized effect size (in terms of key outcomes, such as clinical benefit, safety/tolerability, value and efficiency, or scalability), compared with already available approaches. Address the potential effect of the intervention/service delivery approach in terms of both (1) the empirical basis for the anticipated effect size (e.g., citing data regarding the magnitude of the association between the target and the clinical endpoint of interest and/or effect sizes obtained in prior efficacy studies), and (2) the clinical meaningfulness of the anticipated increment in effects compared to existing approaches.
  • Address the degree to which the proposed intervention/service delivery approach is scalable and could be disseminated into practice, given typically available resources (e.g., trained, skilled providers), typical service structures (including MH financing), and typical service use patterns.
  • Detail how the proposed research will generate data that will lead to a firm conclusion about the feasibility of a regular research project grant or full-scale clinical trial and provide information about the anticipated scope and goals of intended future work.

Innovation:

  • Highlight how innovative research strategies and design/analytic elements (e.g., adaptive sequential randomization, equipoise stratification, technology-based assessments) are incorporated, as appropriate, in order to enhance the study's potential for yielding practice-relevant information.
  • As relevant, describe how applications of technology or other innovative approaches are leveraged to facilitate the conduct of the research project (e.g., participant identification, data collection) and/or to increase the reach, efficiency, or effectiveness of the intervention or service delivery strategy that is being evaluated.

Approach:

  • Detail the plan to explicitly address whether the intervention engages the mechanism that is presumed to underlie the intervention effects (the mechanism that accounts for changes in clinical/functional outcomes, changes in provider behavior, etc.). Include the following: (1) a conceptual framework that clearly identifies the target(s)/mechanism(s) and the empirical evidence linking the target(s)/mechanism(s) to the clinical symptoms, functional deficits, or patient-, provider-, or system-level behaviors/processes that the intervention seeks to improve; (2) plans for assessing engagement of the target(s)/mechanism(s), including the specific measures, the assessment schedule, and the justification for the assessment strategy (e.g., evidence regarding the validity and feasibility of the proposed measures in the effectiveness context); and (3) analytic strategies that will be used to examine whether the intervention engages the target(s) and to conduct a preliminary examination of whether intervention-induced changes in the target(s) are associated with clinical benefit, as appropriate in the pilot trial. In the case of multi-component interventions, the application should specify the conceptual basis, assessment plan, and analytic strategy, as detailed above, for the target(s)/mechanism(s) corresponding to each intervention component, as appropriate, in the effectiveness context.
  • Articulate the process for determining the intervention's initial degree of "fit" and the iterative process that will be used to adapt/refine the intervention for successful implementation with the target population or within the target setting.
  • Provide a clear justification for the type of experimental design chosen; the goal of this phase of research is to propose the most rigorous means of collecting data (in light of ethical or other limitations) that will inform a larger, more definitive test of the intervention. Provide a clear rationale for the choice of methods proposed (e.g., address the rationale for the decision regarding whether or not to include a control group at this stage of pilot research; justify plans to interpret observed outcomes, including feasibility, given the sample size and limitations) and describe how the results will inform the next stages of research.
  • When appropriate, for studies that involve preventive or therapeutic interventions, detail how the study takes into account RDoC or RDoC-like constructs when defining the subject eligibility (inclusion), intervention targets or mechanisms, and outcomes, as feasible in the effectiveness setting.
  • Describe provisions for the assessment and monitoring of the fidelity of intervention delivery via procedures that are feasible and valid for use in community practice settings.
  • Describe design features that will be incorporated to help ensure that the approach can be feasibly implemented in practice, that it is scalable, and that it is robust against implementation drift (e.g., using technology as scaffolding or expert consultation via existing resources/ other sustainable means to support delivery), as appropriate.
  • Highlight how the approach maximizes efficiencies in effectiveness research (e.g., by utilizing existing infrastructure such as CTSAs, practice-based research networks, electronic medical records, administrative databases, patient registries, or other available resources) and describe how efficiencies are incorporated.
  • Describe plans to involve collaborations and/or input from community practice partners/providers, consumers, and relevant policymakers in a manner that informs the research (e.g., to help ensure the interventions/service delivery approaches are acceptable, feasible, and scalable) and helps to ensure the results will have utility.
  • Detail plans to assess and examine consumer-, provider-, and setting-level factors that might be associated with uptake, implementation fidelity, and sustained use of the approach that is being developed and tested. Describe the provider- and setting-level characteristics that will be assessed and the measures that will be used (e.g., standardized measures of provider attitudes/experience, clinic/organizational characteristics).
  • Incorporate outcome measures that are validated and generally accepted by the field, including stakeholder-relevant outcomes (e.g., functioning, health services use), as appropriate.
  • Describe plans to collect data on potential moderators such as clinical and biological variables (e.g., blood for genetic analysis, other potential biomarkers), as appropriate, that might be used to inform or test algorithms for more prescriptive approaches in future work, if relevant.
  • For studies proposing adaptations of existing interventions, provide the empirical justification for the proposed adaptation based on data describing characteristics of client subgroups, settings, care providers or other relevant variables that are associated with non-response, partial response, relapse, or poor uptake (see page 19 of NAMHC Workgroup Report "From Discovery to Cure: Accelerating the Development of New and Personalized Interventions for Mental Illnesses."). Justify the adaptation in terms of (a) theoretical and empirical support for the adaptation target (e.g., a prognostic variable such as a neurocognitive functioning variable, that has been associated with non-response, partial response, patient non-engagement, or relapse); (b) a theoretical/empirical explanation of how the prognostic variable interferes with the response to the un-adapted intervention (i.e., a clear hypothesis regarding the mechanism by which the prognostic variable moderates response to the un-adapted intervention or functions to disadvantage a subgroup); and (c) evidence to suggest that the adapted intervention will result in a substantial improvement in response rate, speed of response, efficiency or other aspects of care, or uptake in community/practice settings when compared to the un-adapted interventions or existing intervention approaches.
  • For studies that involve the assessment of patient-level outcomes, plans are expected for the assessment of suicidal behavior and related outcomes using strategies that can facilitate integration and sharing of data (e.g., see NOT-MH-15-009 and https://www.phenxtoolkit.org/ for constructs and corresponding assessment strategies), as appropriate, or provide a rationale for excluding such measures if they are not included. Accordingly, the application should provide the rationale for the selection of suicide-related constructs and corresponding assessment instruments (e.g., measures of ideation, attempts), the time periods assessed (e.g., lifetime history, current), and the assessment schedule for administration (e.g., baseline, during intervention, post-intervention, follow up), taking into account the nature of the target population, participant burden, etc. The application should also address provisions for clinical management when suicidal behavior is reported. In situations where it is not appropriate or feasible to include assessment of suicide outcomes due to the nature of the intervention (e.g., services interventions that target provider behavior or systems-level factors), the target population (e.g., very young children), or unique issues related to participant burden or safety/monitoring concerns, the application should provide an appropriate justification for excluding these assessments.

Resource Sharing Plan: Individuals are required to comply with the instructions for the Resource Sharing Plans as provided in the SF424 (R&R) Application Guide.

The following modifications also apply:

All applications, regardless of the amount of direct costs requested for any one year, should address a Data Sharing Plan.

To advance research through widespread data sharing among researchers, investigators funded under this FOA are expected to share their data via the National Data Archive (NDA; see NOT-MH-19-033). Established by the NIH, NDA is a secure informatics platform for scientific collaboration and data-sharing that enables the effective communication of detailed research data, tools, and supporting documentation. NDA links data across research projects through its Global Unique Identifier (GUID) and Data Dictionary technology. Investigators funded under this FOA are expected to use these technologies to submit data to NDA.

To accomplish this objective, it will be important to formulate a) an enrollment strategy that will obtain the information necessary to generate a GUID for each participant, and b) a budget strategy that will cover the costs of data submission. The NDA web site provides two tools to help investigators develop appropriate strategies: 1) the NDA Data Submission Cost Model, which offers a customizable Excel worksheet that includes tasks and hours for the Program Director/Principal Investigator and Data Manager to budget for data sharing; and 2) plain language text to be considered in your informed consent, available from the NDA's Data Contribution page. Investigators are expected to certify the quality of all data generated by grants funded under this FOA prior to submission to NDA and to review their data for accuracy after submission. Submission of descriptive/raw data is expected semi-annually (every January 15 and July 15); submission of all other data is expected at the time of publication, or prior to the end of the grant, whichever occurs first (see the NDA Sharing Regimen for more information). Investigators are expected to share results, positive and negative, specific to the cohorts and outcome measures studied. The NDA Data Sharing Plan is available for review on the NDA website. NDA staff will work with investigators to help them submit data types not yet defined in the NDA Data Dictionary.

Appendix:
Only limited Appendix materials are allowed. Follow all instructions for the Appendix as described in the SF424 (R&R) Application Guide.
PHS Human Subjects and Clinical Trials Information

When involving human subjects research, clinical research, and/or NIH-defined clinical trials (and when applicable, clinical trials research experience) follow all instructions for the PHS Human Subjects and Clinical Trials Information form in the SF424 (R&R) Application Guide, with the following additional instructions:

If you answered “Yes” to the question “Are Human Subjects Involved?” on the R&R Other Project Information form, you must include at least one human subjects study record using the Study Record: PHS Human Subjects and Clinical Trials Information form or Delayed Onset Study record.

Study Record: PHS Human Subjects and Clinical Trials Information

All instructions in the SF424 (R&R) Application Guide must be followed.

The following additional instructions must also be followed.

Section 2 - Study Population Characteristics

2.5 Recruitment and Retention Plan

Applications must provide a clear description of:

1. Recruitment and Referral sources, including detailed descriptions of the census/rate of new cases and anticipated yield of eligible participants from each source;

2. Procedures that will be used to monitor enrollment and track/retain participants for follow-up assessments;

3. Strategies that will be used to ensure a diverse, representative sample;

4. Potential recruitment/enrollment challenges and strategies that can be implemented in the event of enrollment shortfalls (e.g., additional outreach procedures, alternate/back-up referral sources);

5. Evidence to support the feasibility of enrollment, including descriptions of prior experiences and yield from research efforts employing similar referral sources and/or strategies.

2.7 Study Timeline

Applications must provide a timeline for reaching important study benchmarks such as: (1) finalizing the study procedures and training participating clinical site staff; (2) finalizing the intervention manual and assessment protocols, including fidelity measures/procedures, where applicable; (3) enrollment benchmarks; (4) completing all subject assessments and data collection activities, including data quality checks; (5) analyzing and interpreting results; and (6) preparing de-identified data and relevant documentation to facilitate data sharing, as appropriate.

Section 5 - Other Clinical Trial-Related Attachments

5.1 Other Clinical Trial- Related Attachments

Applicants must upload the attachments for Intervention Manual/Materials, as applicable. If more than one set of Intervention Manuals/Materials is used, they should be combined into a single attachment. Applicants must use "Intervention Manual/Materials" as the filename for this attachment. As appropriate, this may include screenshots of mobile interventions, technological specifications, training manuals, or treatment algorithms.

Delayed Onset Study

Note: Delayed onset does NOT apply to a study that can be described but will not start immediately (i.e., delayed start).

All instructions in the SF424 (R&R) Application Guide must be followed.

PHS Assignment Request Form

All instructions in the SF424 (R&R) Application Guide must be followed.

Foreign Institutions

Foreign (non-U.S.) institutions must follow policies described in the NIH Grants Policy Statement, and procedures for foreign institutions described throughout the SF424 (R&R) Application Guide.

3. Unique Entity Identifier and System for Award Management (SAM)

See Part 1. Section III.1 for information regarding the requirement for obtaining a unique entity identifier and for completing and maintaining active registrations in System for Award Management (SAM), NATO Commercial and Government Entity (NCAGE) Code (if applicable), eRA Commons, and Grants.gov.

4. Submission Dates and Times

Part I. Overview Information contains information about Key Dates and times. Applicants are encouraged to submit applications before the due date to ensure they have time to make any application corrections that might be necessary for successful submission. When a submission date falls on a weekend or Federal holiday, the application deadline is automatically extended to the next business day.

Organizations must submit applications to Grants.gov (the online portal to find and apply for grants across all Federal agencies). Applicants must then complete the submission process by tracking the status of the application in the eRA Commons, NIH’s electronic system for grants administration. NIH and Grants.gov systems check the application against many of the application instructions upon submission. Errors must be corrected and a changed/corrected application must be submitted to Grants.gov on or before the application due date and time. If a Changed/Corrected application is submitted after the deadline, the application will be considered late. Applications that miss the due date and time are subject to the NIH Policy on Late Application Submission.

Applicants are responsible for viewing their application before the due date in the eRA Commons to ensure accurate and successful submission.

Information on the submission process and a definition of on-time submission are provided in the SF424 (R&R) Application Guide.

5. Intergovernmental Review (E.O. 12372)

This initiative is not subject to intergovernmental review.

6. Funding Restrictions

All NIH awards are subject to the terms and conditions, cost principles, and other considerations described in the NIH Grants Policy Statement.

Pre-award costs are allowable only as described in the NIH Grants Policy Statement.

7. Other Submission Requirements and Information

Applications must be submitted electronically following the instructions described in the SF424 (R&R) Application Guide.  Paper applications will not be accepted.

Applicants must complete all required registrations before the application due date. Section III. Eligibility Information contains information about registration.

For assistance with your electronic application or for more information on the electronic submission process, visit How to Apply – Application Guide. If you encounter a system issue beyond your control that threatens your ability to complete the submission process on-time, you must follow the Dealing with System Issues guidance. For assistance with application submission, contact the Application Submission Contacts in Section VII.

Important reminders:

All PD(s)/PI(s) must include their eRA Commons ID in the Credential field of the Senior/Key Person Profile Component of the SF424(R&R) Application Package. Failure to register in the Commons and to include a valid PD/PI Commons ID in the credential field will prevent the successful submission of an electronic application to NIH. See Section III of this FOA for information on registration requirements.

The applicant organization must ensure that the DUNS number it provides on the application is the same number used in the organization’s profile in the eRA Commons and for the System for Award Management. Additional information may be found in the SF424 (R&R) Application Guide.

See more tips for avoiding common errors.

Upon receipt, applications will be evaluated for completeness and compliance with application instructions by the Center for Scientific Review and for responsiveness by NIMH. Applications that are incomplete, non-compliant, and/or non-responsive will not be reviewed.

NIMH encourages the use of common data elements (CDEs) in basic, clinical, and applied research, patient registries, and other human subject research to facilitate broader and more effective use of data and advance research across studies (See NOT-MH-20-067: "Notice Announcing the National Institute of Mental Health (NIMH) Expectations for Collection of Common Data Elements").

Use of Common Data Elements in NIH-funded Research

Many NIH ICs encourage the use of common data elements (CDEs) in basic, clinical, and applied research, patient registries, and other human subject research to facilitate broader and more effective use of data and advance research across studies. CDEs are data elements that have been identified and defined for use in multiple data sets across different studies. Use of CDEs can facilitate data sharing and standardization to improve data quality and enable data integration from multiple studies and sources, including electronic health records. NIH ICs have identified CDEs for many clinical domains (e.g., neurological disease), types of studies (e.g., genome-wide association studies (GWAS)), types of outcomes (e.g., patient-reported outcomes), and patient registries (e.g., the Global Rare Diseases Patient Registry and Data Repository). NIH has established a "Common Data Element (CDE) Resource Portal" (http://cde.nih.gov/) to assist investigators in identifying NIH-supported CDEs when developing protocols, case report forms, and other instruments for data collection. The Portal provides guidance about and access to NIH-supported CDE initiatives and other tools and resources for the appropriate use of CDEs and data standards in NIH-funded research. Investigators are encouraged to consult the Portal and describe in their applications any use they will make of NIH-supported CDEs in their projects.

Post Submission Materials

Applicants are required to follow the instructions for post-submission materials, as described in the policy. Any instructions provided here are in addition to the instructions in the policy.

Section V. Application Review Information

1. Criteria

Only the review criteria described below will be considered in the review process.  Applications submitted to the NIH in support of the NIH mission are evaluated for scientific and technical merit through the NIH peer review system.

A proposed Clinical Trial application may include study design, methods, and intervention that are not by themselves innovative but address important questions or unmet needs. Additionally, the results of the clinical trial may indicate that further clinical development of the intervention is unwarranted or lead to new avenues of scientific investigation.

Overall Impact

Reviewers will provide an overall impact score to reflect their assessment of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved, in consideration of the following review criteria and additional review criteria (as applicable for the project proposed).

Scored Review Criteria

Reviewers will consider each of the review criteria below in the determination of scientific merit, and give a separate score for each. An application does not need to be strong in all categories to be judged likely to have major scientific impact. For example, a project that by its nature is not innovative may be essential to advance a field.

Significance

Does the project address an important problem or a critical barrier to progress in the field? Is the prior research that serves as the key support for the proposed project rigorous? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

Does the application adequately justify the significance of the research in terms of the potential effect of the proposed intervention/service delivery approach being studied and the anticipated improvement in effect size, safety/tolerability profile, value/efficiency, and scalability or dissemination potential, as compared to existing approaches? Does the application adequately address both (1) the empirical basis for the anticipated effect size (e.g., citing data regarding the magnitude of the association between the target and the clinical endpoint of interest and/or effect sizes obtained in prior efficacy studies), and (2) the clinical meaningfulness of the anticipated increment in effects compared to existing approaches? 

If the approach is successful, is it scalable and could it be disseminated into practice given typically available resources (e.g., trained, skilled providers), typical service structures (including MH financing), and typical service use patterns?

How likely is it that the proposed research will generate data that will lead to a firm conclusion about the feasibility of a regular research project grant or full-scale clinical trial?

Does the application describe how the proposed pilot effectiveness study and data collection will inform future work?

Are the scientific rationale and need for a clinical trial to test the proposed hypothesis or intervention well supported by preliminary data, clinical and/or preclinical studies, or information in the literature or knowledge of biological mechanisms? For trials focusing on clinical or public health endpoints, is this clinical trial necessary for testing the safety, efficacy or effectiveness of an intervention that could lead to a change in clinical practice, community behaviors or health care policy? For trials focusing on mechanistic, behavioral, physiological, biochemical, or other biomedical endpoints, is this trial needed to advance scientific understanding?

Investigator(s)

Are the PD(s)/PI(s), collaborators, and other researchers well suited to the project? If Early Stage Investigators or those in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the project is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance and organizational structure appropriate for the project?

With regard to the proposed leadership for the project, do the PD/PI(s) and key personnel have the expertise, experience, and ability to organize, manage and implement the proposed clinical trial and meet milestones and timelines? Do they have appropriate expertise in study coordination, data management and statistics? For a multicenter trial, is the organizational structure appropriate and does the application identify a core of potential center investigators and staffing for a coordinating center?

Innovation

Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

As relevant, evaluate the extent to which applications of technology (e.g., digital health or mHealth approaches) or other innovative approaches are leveraged to facilitate the conduct of the research project (e.g., use of Electronic Health Records or other administrative data to facilitate participant identification and data collection) and/or to increase the reach, efficiency, or effectiveness of the intervention or service delivery strategy that is being evaluated.

Examples of innovative elements could include adaptive sequential randomization, equipoise stratification, or technology-assisted assessment, among others.

Does the design/research plan include innovative elements, as appropriate, that enhance its sensitivity, informative potential, or potential to advance scientific knowledge or clinical practice?

Approach

Are the overall strategy, methodology, and analyses well-reasoned and appropriate to accomplish the specific aims of the project? Have the investigators included plans to address weaknesses in the rigor of prior research that serves as the key support for the proposed project? Have the investigators presented strategies to ensure a robust and unbiased approach, as appropriate for the work proposed? Are potential problems, alternative strategies, and benchmarks for success presented? If the project is in the early stages of development, will the strategy establish feasibility and will particularly risky aspects be managed? Have the investigators presented adequate plans to address relevant biological variables, such as sex, for studies in vertebrate animals or human subjects?

If the project involves human subjects and/or NIH-defined clinical research, are the plans to address 1) the protection of human subjects from research risks, and 2) inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion or exclusion of individuals of all ages (including children and older adults), justified in terms of the scientific goals and research strategy proposed?

How well does the study design explicitly address whether the intervention engages the mechanism that is presumed to underlie the intervention effects (the mechanism that accounts for changes in clinical/functional outcomes, changes in provider behavior, etc.), so that the results will inform understanding of change mechanisms and inform decisions about whether further effectiveness testing is warranted? To what extent does the application include (1) a well-supported conceptual framework that clearly identifies the target(s)/mechanism(s) and the empirical evidence linking the target(s)/mechanism(s) to the clinical symptoms, functional deficits, or patient-, provider-, or system-level behaviors/processes that the intervention seeks to improve; (2) well-justified plans for assessing engagement of the target(s)/mechanism(s), including the specific measures, the assessment schedule, and the justification for the assessment strategy (e.g., evidence regarding the validity and feasibility of the proposed measures in the effectiveness context); and (3) appropriate analytic strategies that will be used to examine whether the intervention engages the target(s) and to conduct a preliminary examination of whether intervention-induced changes in the target(s) are associated with clinical benefit, as appropriate in the pilot trial? In the case of multi-component interventions, how well does the application specify the conceptual basis, assessment plan, and analytic strategy, as detailed above, for the target(s)/mechanism(s) corresponding to each intervention component, as appropriate in the effectiveness context?

Is there an adequate description of the steps for intervention development/refinement and a clear rationale for the choice of methods proposed? For example, does the application articulate a clear process for determining the intervention's initial degree of "fit" and the iterative process that will be used to adapt/refine the intervention for successful implementation with the target population or within the target setting?

Is there an appropriate rationale for the decision regarding whether or not to include a control group at this stage of pilot research? Is there appropriate caution regarding plans to interpret observed outcomes given sample size limitations? Does the application adequately justify that the sample size is sufficient to accomplish the feasibility goals?

When appropriate, for studies that involve preventive or therapeutic interventions, does the study take into account RDoC or RDoC-like constructs when defining the subject eligibility (inclusion), intervention targets or mechanisms, and outcomes, as feasible in the effectiveness setting?

Does the application include provisions for the assessment and monitoring of the fidelity of intervention delivery via procedures that are feasible and valid for use in community practice settings, as appropriate?

As appropriate, to what extent does the approach incorporate design features that will help ensure that the intervention can be feasibly implemented in practice, that it is scalable, and that it is robust against implementation drift (e.g., using technology as scaffolding or expert consultation via existing resources/ other sustainable means to support delivery)?

Does the application include plans to involve collaborations and/or input from community practice partners/providers, consumers, and relevant policy makers in a manner that informs the research (e.g., to help ensure the interventions/service delivery approaches are acceptable, feasible, and scalable) and helps to ensure the results will have utility?

To what extent does the application include plans to assess and examine consumer-, provider-, and setting-level factors that might be associated with uptake, implementation fidelity, and sustained use of the approach that is being developed and tested? How well does the application describe the provider- and setting-level characteristics that will be assessed and the measures that will be used (e.g., standardized measures of provider attitudes/experience, clinic/organizational characteristics)?

Are proposed outcome measures validated and generally accepted by the field; are stakeholder-relevant outcomes included, as appropriate (e.g., functioning, health services use)?

Does the study explore the feasibility of collecting data on potential moderators such as clinical and biological variables (e.g., blood for genetic analysis, other potential biomarkers), as appropriate, that might be used to inform or test algorithms for more prescriptive approaches?

Are the data and biospecimens collected in a manner that could ultimately facilitate appropriate data sharing and integration into larger databases (e.g., use of common data elements), consistent with the goal of advancing effectiveness research?

For studies proposing adaptations of existing interventions for broader use, is the justification for the proposed adaptation based on data describing characteristics of client subgroups, settings, care providers or other relevant variables that are associated with non-response, partial response, relapse, or poor uptake?

Does the application adequately address the following, if applicable:

Study Design

Is the study design justified and appropriate to address primary and secondary outcome variable(s)/endpoints that will be clear, informative and relevant to the hypothesis being tested? Is the scientific rationale/premise of the study based on previously well-designed preclinical and/or clinical research? Given the methods used to assign participants and deliver interventions, is the study design adequately powered to answer the research question(s), test the proposed hypothesis/hypotheses, and provide interpretable results? Is the trial appropriately designed to conduct the research efficiently? Are the study populations (size, gender, age, demographic group), proposed intervention arms/dose, and duration of the trial, appropriate and well justified?

Are potential ethical issues adequately addressed? Is the process for obtaining informed consent or assent appropriate? Is the eligible population available? Are the plans for recruitment outreach, enrollment, retention, handling dropouts, missed visits, and losses to follow-up appropriate to ensure robust data collection? Are the planned recruitment timelines feasible and is the plan to monitor accrual adequate? Has the need for randomization (or not), masking (if appropriate), controls, and inclusion/exclusion criteria been addressed? Are differences addressed, if applicable, in the intervention effect due to sex/gender and race/ethnicity?

Are the plans to standardize, assure quality of, and monitor adherence to, the trial protocol and data collection or distribution guidelines appropriate? Is there a plan to obtain required study agent(s)? Does the application propose to use existing available resources, as applicable?

Data Management and Statistical Analysis

Are planned analyses and statistical approach appropriate for the proposed study design and methods used to assign participants and deliver interventions? Are the procedures for data management and quality control of data adequate at clinical site(s) or at center laboratories, as applicable? Have the methods for standardization of procedures for data management to assess the effect of the intervention and quality control been addressed? Is there a plan to complete data analysis within the proposed period of the award?

Environment

Will the scientific environment in which the work will be done contribute to the probability of success? Are the institutional support, equipment and other physical resources available to the investigators adequate for the project proposed? Will the project benefit from unique features of the scientific environment, subject populations, or collaborative arrangements?

As appropriate, are the plans achievable for establishing necessary agreements with all partners (e.g., single IRB) in a timely manner?

If proposed, are the administrative, data coordinating, enrollment and laboratory/testing centers, appropriate for the trial proposed?

Does the application adequately address the capability and ability to conduct the trial at the proposed site(s) or centers? Are the plans to add or drop enrollment centers, as needed, appropriate?

If international site(s) is/are proposed, does the application adequately address the complexity of executing the clinical trial?

If multi-sites/centers, is there evidence of the ability of the individual site or center to: (1) enroll the proposed numbers; (2) adhere to the protocol; (3) collect and transmit data in an accurate and timely fashion; and, (4) operate within the proposed organizational structure?

Additional Review Criteria

As applicable for the project proposed, reviewers will evaluate the following additional items while determining scientific and technical merit, and in providing an overall impact score, but will not give separate scores for these items.

Study Timeline


Is the study timeline described in detail, taking into account start-up activities, the anticipated rate of enrollment, and planned follow-up assessment? Is the projected timeline feasible and well justified? Does the project incorporate efficiencies and utilize existing resources (e.g., CTSAs, practice-based research networks, electronic medical records, administrative database, or patient registries) to increase the efficiency of participant enrollment and data collection, as appropriate?

Are potential challenges and corresponding solutions discussed (e.g., strategies that can be implemented in the event of enrollment shortfalls)?

Protections for Human Subjects

For research that involves human subjects but does not involve one of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate the justification for involvement of human subjects and the proposed protections from research risk relating to their participation according to the following five review criteria: 1) risk to subjects, 2) adequacy of protection against risks, 3) potential benefits to the subjects and others, 4) importance of the knowledge to be gained, and 5) data and safety monitoring for clinical trials.

For research that involves human subjects and meets the criteria for one or more of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate: 1) the justification for the exemption, 2) human subjects involvement and characteristics, and 3) sources of materials. For additional information on review of the Human Subjects section, please refer to the Guidelines for the Review of Human Subjects.

Inclusion of Women, Minorities, and Individuals Across the Lifespan

When the proposed project involves human subjects and/or NIH-defined clinical research, the committee will evaluate the proposed plans for the inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion (or exclusion) of individuals of all ages (including children and older adults) to determine if it is justified in terms of the scientific goals and research strategy proposed. For additional information on review of the Inclusion section, please refer to the Guidelines for the Review of Inclusion in Clinical Research.

Vertebrate Animals

Generally Not Applicable. Reviewers should bring any concerns to the attention of the Scientific Review Officer.

Biohazards

Reviewers will assess whether materials or procedures proposed are potentially hazardous to research personnel and/or the environment, and if needed, determine whether adequate protection is proposed.

Resubmissions

For Resubmissions, the committee will evaluate the application as now presented, taking into consideration the responses to comments from the previous scientific review group and changes made to the project.

Renewals

Not Applicable

Revisions

For Revisions, the committee will consider the appropriateness of the proposed expansion of the scope of the project. If the Revision application relates to a specific line of investigation presented in the original application that was not recommended for approval by the committee, then the committee will consider whether the responses to comments from the previous scientific review group are adequate and whether substantial changes are clearly evident.

Additional Review Considerations

As applicable for the project proposed, reviewers will consider each of the following items, but will not give scores for these items, and should not consider them in providing an overall impact score.

Applications from Foreign Organizations

Reviewers will assess whether the project presents special opportunities for furthering research programs through the use of unusual talent, resources, populations, or environmental conditions that exist in other countries and either are not readily available in the United States or augment existing U.S. resources.

Select Agent Research

Reviewers will assess the information provided in this section of the application, including 1) the Select Agent(s) to be used in the proposed research, 2) the registration status of all entities where Select Agent(s) will be used, 3) the procedures that will be used to monitor possession, use, and transfer of Select Agent(s), and 4) plans for appropriate biosafety, biocontainment, and security of the Select Agent(s).

Resource Sharing Plans

Reviewers will comment on whether the following Resource Sharing Plans, or the rationale for not sharing the following types of resources, are reasonable: (1) Data Sharing Plan; (2) Sharing Model Organisms; and (3) Genomic Data Sharing Plan (GDS).

Authentication of Key Biological and/or Chemical Resources:

For projects involving key biological and/or chemical resources, reviewers will comment on the brief plans proposed for identifying and ensuring the validity of those resources.

Budget and Period of Support

Reviewers will consider whether the budget and the requested period of support are fully justified and reasonable in relation to the proposed research.

2. Review and Selection Process

Applications will be evaluated for scientific and technical merit by (an) appropriate Scientific Review Group(s) convened by NIMH, in accordance with NIH peer review policy and procedures, using the stated review criteria. Assignment to a Scientific Review Group will be shown in the eRA Commons.

As part of the scientific peer review, all applications will receive a written critique.

Applications may undergo a selection process in which only those applications deemed to have the highest scientific and technical merit (generally the top half of applications under review) will be discussed and assigned an overall impact score.

Applications will be assigned on the basis of established PHS referral guidelines to the appropriate NIH Institute or Center. Applications will compete for available funds with all other recommended applications. Following initial peer review, recommended applications will receive a second level of review by the appropriate national Advisory Council or Board. The following will be considered in making funding decisions:

  • Scientific and technical merit of the proposed project as determined by scientific peer review.
  • Availability of funds.
  • Relevance of the proposed project to program priorities.

3. Anticipated Announcement and Award Dates

After the peer review of the application is completed, the PD/PI will be able to access his or her Summary Statement (written critique) via the eRA Commons. Refer to Part 1 for dates for peer review, advisory council review, and earliest start date.

Information regarding the disposition of applications is available in the NIH Grants Policy Statement.

Section VI. Award Administration Information

1. Award Notices

If the application is under consideration for funding, NIH will request "just-in-time" information from the applicant as described in the NIH Grants Policy Statement.

A formal notification in the form of a Notice of Award (NoA) will be provided to the applicant organization for successful applications. The NoA signed by the grants management officer is the authorizing document and will be sent via email to the recipient's business official.

Awardees must comply with any funding restrictions described in Section IV.5. Funding Restrictions. Selection of an application for award is not an authorization to begin performance. Any costs incurred before receipt of the NoA are at the recipient's risk. These costs may be reimbursed only to the extent considered allowable pre-award costs.

Any application awarded in response to this FOA will be subject to terms and conditions found on the Award Conditions and Information for NIH Grants website.  This includes any recent legislation and policy applicable to awards that is highlighted on this website.

The NIMH has published policies and guidance for investigators regarding human research protection, data and safety monitoring, Independent Safety Monitors and Data and Safety Monitoring Boards, reportable events, and participant recruitment monitoring (NOT-MH-19-027). The application’s PHS Human Subjects and Clinical Trials Information should reflect the manner in which these policies will be implemented for each study record. These plans will be reviewed by the NIMH for consistency with NIMH and NIH policies and federal regulations. The NIMH will expect clinical trials to be conducted in accordance with these policies including, but not limited to: timely registration to ClinicalTrials.gov, submission of review determinations from the clinical trial’s data and safety monitoring entity (at least annually), timely submission of reportable events as prescribed, and establishment of recruitment milestones and progress reporting.

Individual awards are based on the application submitted to, and as approved by, the NIH and are subject to the IC-specific terms and conditions identified in the NoA.

ClinicalTrials.gov: If an award provides for one or more clinical trials, then by law (Title VIII, Section 801 of Public Law 110-85), the "responsible party" must register and submit results information for certain “applicable clinical trials” on the ClinicalTrials.gov Protocol Registration and Results System Information Website (https://register.clinicaltrials.gov). NIH expects registration and results reporting of all trials, whether required under the law or not. For more information, see https://grants.nih.gov/policy/clinical-trials/reporting/index.htm

Institutional Review Board or Independent Ethics Committee Approval: Recipient institutions must ensure that all protocols are reviewed by their IRB or IEC. To help ensure the safety of participants enrolled in NIH-funded studies, the awardee must provide NIH copies of documents related to all major changes in the status of ongoing protocols.

Data and Safety Monitoring Requirements: The NIH policy for data and safety monitoring requires oversight and monitoring of all NIH-conducted or -supported human biomedical and behavioral intervention studies (clinical trials) to ensure the safety of participants and the validity and integrity of the data. Further information concerning these requirements is found at http://grants.nih.gov/grants/policy/hs/data_safety.htm and in the application instructions (SF424 (R&R) and PHS 398).

Investigational New Drug or Investigational Device Exemption Requirements: Consistent with federal regulations, clinical research projects involving the use of investigational therapeutics, vaccines, or other medical interventions (including licensed products and devices for a purpose other than that for which they were licensed) in humans under a research protocol must be performed under a Food and Drug Administration (FDA) investigational new drug (IND) or investigational device exemption (IDE).

2. Administrative and National Policy Requirements

All NIH grant and cooperative agreement awards include the NIH Grants Policy Statement as part of the NoA. For these terms of award, see the NIH Grants Policy Statement Part II: Terms and Conditions of NIH Grant Awards, Subpart A: General and Part II: Terms and Conditions of NIH Grant Awards, Subpart B: Terms and Conditions for Specific Types of Grants, Recipients, and Activities. More information is provided at Award Conditions and Information for NIH Grants.

Recipients of federal financial assistance (FFA) from HHS must administer their programs in compliance with federal civil rights laws that prohibit discrimination on the basis of race, color, national origin, disability, age and, in some circumstances, religion, conscience, and sex. This includes ensuring programs are accessible to persons with limited English proficiency. The HHS Office for Civil Rights provides guidance on complying with civil rights laws enforced by HHS. Please see https://www.hhs.gov/civil-rights/for-providers/provider-obligations/index.html and http://www.hhs.gov/ocr/civilrights/understanding/section1557/index.html.

HHS recognizes that research projects are often limited in scope for many reasons that are nondiscriminatory, such as the principal investigator’s scientific interest, funding limitations, recruitment requirements, and other considerations. Thus, criteria in research protocols that target or exclude certain populations are warranted where nondiscriminatory justifications establish that such criteria are appropriate with respect to the health or safety of the subjects, the scientific study design, or the purpose of the research. For additional guidance regarding how the provisions apply to NIH grant programs, please contact the Scientific/Research Contact that is identified in Section VII under Agency Contacts of this FOA.

Please contact the HHS Office for Civil Rights for more information about obligations and prohibitions under federal civil rights laws at https://www.hhs.gov/ocr/about-us/contact-us/index.html or call 1-800-368-1019 or TDD 1-800-537-7697.

In accordance with the statutory provisions contained in Section 872 of the Duncan Hunter National Defense Authorization Act of Fiscal Year 2009 (Public Law 110-417), NIH awards will be subject to the Federal Awardee Performance and Integrity Information System (FAPIIS) requirements. FAPIIS requires Federal award making officials to review and consider information about an applicant in the designated integrity and performance system (currently FAPIIS) prior to making an award. An applicant, at its option, may review information in the designated integrity and performance systems accessible through FAPIIS and comment on any information about itself that a Federal agency previously entered and is currently in FAPIIS. The Federal awarding agency will consider any comments by the applicant, in addition to other information in FAPIIS, in making a judgment about the applicant's integrity, business ethics, and record of performance under Federal awards when completing the review of risk posed by applicants as described in 45 CFR Part 75.205 "Federal awarding agency review of risk posed by applicants." This provision will apply to all NIH grants and cooperative agreements except fellowships.

Cooperative Agreement Terms and Conditions of Award

Not Applicable

3. Reporting

When multiple years are involved, awardees will be required to submit the Research Performance Progress Report (RPPR) annually and financial statements as required in the NIH Grants Policy Statement.

A final RPPR, invention statement, and the expenditure data portion of the Federal Financial Report are required for closeout of an award, as described in the NIH Grants Policy Statement.

The Federal Funding Accountability and Transparency Act of 2006 (Transparency Act) includes a requirement for awardees of Federal grants to report information about first-tier subawards and executive compensation under Federal assistance awards issued in FY2011 or later. All awardees of applicable NIH grants and cooperative agreements are required to report to the Federal Subaward Reporting System (FSRS), available at www.fsrs.gov, on all subawards over $25,000. See the NIH Grants Policy Statement for additional information on this reporting requirement.

In accordance with the regulatory requirements provided at 45 CFR 75.113 and Appendix XII to 45 CFR Part 75, recipients that have currently active Federal grants, cooperative agreements, and procurement contracts from all Federal awarding agencies with a cumulative total value greater than $10,000,000 for any period of time during the period of performance of a Federal award, must report and maintain the currency of information reported in the System for Award Management (SAM) about civil, criminal, and administrative proceedings in connection with the award or performance of a Federal award that reached final disposition within the most recent five-year period.  The recipient must also make semiannual disclosures regarding such proceedings. Proceedings information will be made publicly available in the designated integrity and performance system (currently FAPIIS).  This is a statutory requirement under section 872 of Public Law 110-417, as amended (41 U.S.C. 2313).  As required by section 3010 of Public Law 111-212, all information posted in the designated integrity and performance system on or after April 15, 2011, except past performance reviews required for Federal procurement contracts, will be publicly available.  Full reporting requirements and procedures are found in Appendix XII to 45 CFR Part 75 – Award Term and Conditions for Recipient Integrity and Performance Matters.

Section VII. Agency Contacts

We encourage inquiries concerning this funding opportunity and welcome the opportunity to answer questions from potential applicants.

Application Submission Contacts

eRA Service Desk (Questions regarding ASSIST, eRA Commons, application errors and warnings, documenting system problems that threaten submission by the due date, and post-submission issues)

Finding Help Online: http://grants.nih.gov/support/ (preferred method of contact)
Telephone: 301-402-7469 or 866-504-9552 (Toll Free)

General Grants Information (Questions regarding application instructions, application processes, and NIH grant resources)
Email: GrantsInfo@nih.gov (preferred method of contact)
Telephone: 301-945-7573

Grants.gov Customer Support (Questions regarding Grants.gov registration and Workspace)
Contact Center Telephone: 800-518-4726
Email: support@grants.gov

Scientific/Research Contact(s)

For Studies involving therapeutic and preventive interventions:

Joel Sherrill, Ph.D.
National Institute of Mental Health (NIMH)
Telephone: 301-443-2477
Email: jsherril@mail.nih.gov

For Studies involving services interventions:

Michael Freed, Ph.D.
National Institute of Mental Health (NIMH)
Telephone: 301-443-3747
Email: michael.freed@nih.gov

Peer Review Contact(s)

Nick Gaiano, Ph.D.
National Institute of Mental Health (NIMH)
Telephone: 301-827-3420
Email: nick.gaiano@nih.gov

Financial/Grants Management Contact(s)

Tamara Kees
National Institute of Mental Health (NIMH)
Telephone: 301-443-8811
Email: tkees@mail.nih.gov

Section VIII. Other Information

Recently issued trans-NIH policy notices may affect your application submission. A full list of policy notices published by NIH is provided in the NIH Guide for Grants and Contracts. All awards are subject to the terms and conditions, cost principles, and other considerations described in the NIH Grants Policy Statement.

Authority and Regulations

Awards are made under the authorization of Sections 301 and 405 of the Public Health Service Act as amended (42 USC 241 and 284) and under Federal Regulations 42 CFR Part 52 and 45 CFR Part 75.
