Research on Chatbots and their Usage

When beginning your next investigator-initiated application, consider the following NIH highlighted topic. The area of science described below is of interest to the listed NIH Institutes, Centers, and Offices (ICOs). This is not a notice of funding opportunity (NOFO).

Apply through an appropriate NIH Parent Funding Announcement or another broad NIH opportunity available on Grants.gov. Learn how to interpret and use Highlighted Topics.

Topic Description

Post Date: April 15, 2026

Expiration Date: April 15, 2027

Background

Conversational chatbots are increasingly integrated into daily life, including use in health-related information seeking, decision support, social interaction, and informal caregiving contexts. These systems are now used by individuals to interpret symptoms, manage chronic conditions, make financial and health decisions, and mitigate social isolation, often without professional oversight.

Use of chatbot technologies is expanding rapidly, and patterns of adoption, benefit, and risk are likely to differ across populations, contexts, and system designs. At present, there is insufficient evidence regarding the benefits and harms of chatbot use for health and well-being or to characterize unintended consequences such as misinformation, over-reliance, altered decision-making, or delayed engagement with professional care.

Rigorous research is needed to understand how chatbot design (including potential built-in safeguards), chatbot use (including but not limited to context and frequency of use), and user characteristics interact to shape health-relevant outcomes. Efforts should inform safeguards, standards, and evidence-based guidance.

Purpose

This topic encourages multidisciplinary research that identifies, measures, and explains the benefits and harms associated with chatbot use (both chatbot-based interventions or treatments and routine chatbot use) across a variety of populations, use cases, and settings. Of interest are studies that move beyond proof-of-concept to characterize mechanisms, safety, and impacts on behavior, decision-making, and health outcomes.

Of particular interest are studies that:

  • Identify risk mechanisms: including automation bias, persuasive effects, misinformation exposure, behavioral dependency, and substitution for professional care, and document adverse outcomes across different chatbot designs and health- or wellbeing-related purposes.
  • Compare safety and performance across chatbot models: including adaptive versus static designs and varying degrees of personalization, information retrieval, or relational engagement.
  • Examine impacts on decision-making and autonomy: including how chatbot use influences judgment, independence, help-seeking behavior, and relationships with family, caregivers, and clinicians.
  • Characterize health and functional outcomes associated with patterns of use: including frequency, duration, and longitudinal engagement, with attention to cognitive, emotional, social, and behavioral consequences.
  • Develop and test safeguards: including monitoring strategies and design features that promote safe, ethical, secure, and beneficial use, particularly for populations at elevated risk of harm.

Ideally, studies will focus on advancing understanding of causal mechanisms driving observed or hypothesized effects and emphasize methodological rigor, appropriate comparators, and real-world relevance.

Participating ICOs

National Institute on Aging (NIA)

NIA is interested in research that advances understanding of both benefits and harms of chatbot use by middle-aged and older adults, including those with cognitive impairment, and their caregivers.

Areas of interest include, but are not limited to:

  • Examinations of benefits, safety, and unintended consequences of chatbot use, including responses to high-risk situations and how use may affect timely access to appropriate professional care.
  • Investigations of impacts on decision-making and autonomy, including over-reliance, erosion or support of independence, and effects on caregiver dynamics.
  • Characterizations of patterns of use over time, including how sustained interaction with chatbots influences cognitive engagement, social connection, dependency, functional trajectories, and health outcomes.
  • Development and evaluation of ethical, technical, and organizational safeguards, including monitoring, transparency, and escalation pathways.
ICO Scientific Contacts:
Joe Chiarenzelli, MPH
[email protected]

Luke Stoeckel, Ph.D.
[email protected]

Marcel Salive, MD, MPH
[email protected]

National Center for Complementary and Integrative Health (NCCIH)

NCCIH is interested in research to examine the benefits and harms of incorporating chatbots into complementary and integrative health (CIH) approaches and interventions. Areas of interest include, but are not limited to:

  • Examine benefits, safety, and unintended consequences of chatbot use to deliver CIH interventions to alleviate stress, manage physical and/or mental health symptoms, or enhance whole person health outcomes.
  • Examine impacts on decision-making, including how chatbot use for CIH interventions affects other help-seeking behavior.
  • Characterize patterns of chatbot use in CIH interventions over time, including how sustained interaction with chatbots for CIH interventions influences mental, emotional, social and behavioral health outcomes, particularly in youth.
  • Develop and test safeguards to promote safe, ethical, secure, and beneficial use of chatbots to deliver CIH interventions across the lifespan.
IC may give special consideration to support meritorious applications in this topic area.
ICO Scientific Contact:
Beda Jean-Francois
[email protected]

National Cancer Institute (NCI)

NCI is interested in research informing responsible development and use of chatbots by optimizing benefits and reducing unintended consequences across the cancer control continuum.

Topics of interest include:

  • Understanding use of chatbots in cancer prevention and control, specifically cancer information, social and psychological support, symptom management, and assistance with cancer screening and treatment decision-making.
  • Investigating the impact of chatbot use in clinical settings, including decision support, patient–clinician communication, and evaluating downstream effects on care quality and health outcomes.
  • Developing methods to evaluate the safety, efficacy, and real-world impact of chatbot use in cancer care delivery (e.g., benchmarking, model drift, hallucinations).
  • Developing and testing chatbot interventions using sustained co-design with cancer patients and caregivers, including methods to advance patient-driven approaches and generative AI literacy.
ICO Scientific Contact:
Roxanne Jensen, Ph.D.
[email protected]

National Institute on Drug Abuse (NIDA)

NIDA seeks to understand chatbot use across substance use disorder (SUD) trajectories and symptoms. Of particular interest are chatbots developed, adapted, or evaluated for SUD-related contexts and outcomes.

NIDA invites studies that explore topics including, but not limited to:

  • SUD-focused foundational model development and evaluation, including clinical validity.
  • SUD prevention, early intervention, treatment, and recovery support.
  • SUD-motivated craving, decision-making, and other cognitive/behavioral processes, including longitudinal examinations across the lifespan.
  • Clinical decision support and care integration (e.g., screening, referral, trial matching, and workflow efficiency).
  • SUD-specific ethical, regulatory, and policy considerations for chatbot development and deployment, including privacy, safety, and responsible use.
ICO Scientific Contact:
Diek Wheeler
[email protected]

National Institute of Dental and Craniofacial Research (NIDCR)

NIDCR is interested in research examining chatbot use in dental, oral, and craniofacial (DOC) health care and its implications for patients, providers, and care systems. Areas of interest include, but are not limited to:

  • Chatbot applications for symptom interpretation and decision support related to DOC health conditions
  • Impacts on healthcare-seeking behavior, timely access to dental care, and patient engagement
  • Coordination between chatbot-delivered guidance and professional dental care
  • Effects on accuracy and efficiency of professional and interprofessional DOC health-related clinical care delivery
  • Effects on patient-provider communication within and outside DOC settings

Studies may address chatbot use for managing chronic oral conditions or post-treatment monitoring. Of particular interest are design features that promote beneficial chatbot use while minimizing risks of misinformation, mistreatment, or delayed professional consultation for DOC health conditions.

IC may give special consideration to support meritorious applications in this topic area.
ICO Scientific Contact:
Lorena Baccaglini, DDS, MS, PhD
[email protected]

National Institute of Mental Health (NIMH)

NIMH is interested in research that advances understanding of benefits and harms of chatbot use for mental health and HIV across the lifespan.

Areas of interest include, but are not limited to:

  • Development and validation of methods that use chatbot data to enhance clinical practice and governance (e.g., to detect symptoms, stratify risk, enable timely and ongoing care and safety monitoring, support clinician decision making and measurement-based care) in mental health and HIV care settings
  • Use of experimental paradigms and computational frameworks to test neurocognitive mechanisms by which chatbot interactions impact behavior; psychopathology and symptom trajectories; and risk for or resilience to mental illness and HIV-related health outcomes
  • Evaluation of how chatbot users’ psychiatric, medical, and/or social histories interact with chatbot design features, and effects on functional and/or clinical outcomes
ICO Scientific Contact:
Elizabeth A. Lyons, PhD
[email protected]

National Library of Medicine (NLM)

NLM is interested in supporting research that advances rigorous, real-world evidence on the benefits, risks, and mechanisms of chatbot use in health and well-being contexts. Research on conversational chatbots has the potential to substantially influence public health, healthcare delivery, and individual well-being as these technologies become embedded in everyday decision-making. NLM encourages research on chatbot benchmarking and on the evaluation of safety, reliability, security, generalizability, and misinformation risks and mitigation strategies. Research in these areas will help identify factors critical to chatbot designs that ensure safe, efficient, and engaged use. This research is crucial for the development and adoption of trustworthy, reproducible, and rigorous chatbots as human-centered tools that assist care providers, patients, and individuals in interpreting symptoms, managing chronic conditions, and making health decisions.

ICO Scientific Contact:
Yanli Wang, PhD
[email protected]

Office of Behavioral and Social Sciences Research (OBSSR)

OBSSR is interested in behavioral and social science aspects of multidisciplinary research that identifies, measures, and explains the benefits and harms associated with chatbot use across a variety of populations, use cases, and settings. 

This office does not award grants. Applications must be relevant to the objectives of at least one of the participating Institutes or Centers listed in this topic.

Office of Data Science Strategy (ODSS)

ODSS aims to advance socio-technical solutions like best-practice resources, evidence-based guidelines, safeguards, and guardrails to optimize AI training data, algorithms, and models and support their rigorous assessment, validation, and adoption. Examples include:

  • Creation of standard metrics and frameworks to evaluate the safety, efficacy, and real-world impact of chatbot use in health
  • Development and use of responsible AI models that are explainable, transparent, and FAIR
  • Building of community partnerships to co-develop trustworthy health AI solutions while maintaining robust, responsible, and transparent standards
  • Integration of community standards and clinical informatics standards such as HL7® Fast Healthcare Interoperability Resources (FHIR®), SMART on FHIR®, and United States Core Data for Interoperability (USCDI)
  • Programs to evaluate and guide responsible use of chatbots in research
IC may give special consideration to support meritorious applications in this topic area.
This office does not award grants. Applications must be relevant to the objectives of at least one of the participating Institutes or Centers listed in this topic.
ICO Scientific Contact:
Christine Cutillo
[email protected]


For technical issues, e-mail the OER Webmaster.