Conference Room 3087, Rockledge Center 2, Rockville, MD
Table of Contents
Update on Meetings
Clinical Research
Small Business Innovation Research
Rating of Grant Applications (RGA)
DRG/IC Review by Mechanism (DRG)
Integration of Neuroscience Reviews
Strategies for Interim Support and Review
New Agenda items for Next Meeting
Summary and Conclusions
Dr. Baldwin welcomed the members to the second meeting of the Peer Review
Oversight Group (PROG) and gave a general overview of the agenda. She reflected
on the status of the NIH as compared to one year ago when the staff and the
scientific community were coping with a furlough, a suspended cost-management
plan, and a one-billion-dollar backlog in the making of awards. This year the
NIH feels fortunate to have a budget with a 6.9% increase and a stable work
environment. After the minutes of the past meeting were approved, Dr. Baldwin
provided updates on some relevant issues of which the members should be aware,
although the PROG will not be immediately addressing these issues.
Update on NIH Meetings
Clinical Research: Dr. Baldwin indicated that, while the
recommendations of the Panel on Clinical Research are not within the purview of
PROG, the review of clinical applications is an issue which they may one day
discuss; Dr. Baldwin indicated that she felt it appropriate for the Division of
Research Grants (DRG) to deal first with this issue, and Dr. Eleanor Ehrenfeld,
Director Designee of the DRG, indicated that they are already at work on this.
The related issue of low-volume areas of research (which applies to some
clinical areas) is something that will be addressed by the PROG in the future.
There was some discussion of the definition of clinical research, and Dr. Seto
indicated that awards coded as clinical research can be further defined as
patient-oriented research, epidemiologic and prevention research, behavioral
studies, clinical trials, mechanisms of human disease, health services and
outcomes research. Dr. Seto noted that this coding is currently underway for
all competing (Type 1 and Type 2) awards; the Clinical Panel recommended that
the Office of Extramural Research continue the coding activity and provide
yearly updates. Dr. Baldwin indicated that the interim report of the clinical
panel is under preparation now and she encouraged the members to communicate
with Dr. Varmus or the Chair of the Clinical Panel, Dr. David Nathan, if they
had additional comments.
Small Business Innovation Research: Dr. Baldwin indicated that the
Small Business Innovation Research (SBIR) and Small Business Technology Transfer
(STTR) programs are specifically designed for research in small businesses and
are expected to lead to marketable products. She announced that there will be a
special workshop at NIH on January 22; this meeting will involve various members
of the scientific community, including small business representatives, and will
address congressional conference report language which raises questions about
the scoring of SBIR applications in comparison to investigator-initiated
research project grants (R01s). The scoring patterns and conventions are
different, making such comparisons tenuous. There is sometimes unevenness in
paylines across Institutes and Centers (ICs) because the number of submissions
may vary. A new management procedure has been instituted whereby any SBIR or
STTR application with a score greater than 250 must be cleared through the
Office of the Deputy Director for Extramural Research (DDER); Dr. Baldwin
commented that she has been impressed by the rationales provided with the
requests to pay some of these applications. In addition, ICs have been engaging
in an inter-Institute sharing to ensure that applications which score well can
be paid. The workshop will explore ways to continue to improve the quality of
SBIR and STTR applications. Anticipating the issues of review integration to be
dealt with later in the meeting, Dr. Baldwin noted that all SBIR/STTR
applications except those of the National Institute of Mental Health are
reviewed within the DRG, and that review panels include small business
representatives. Dr. McGowan expressed satisfaction with the review process,
and Dr. Leshner expressed enthusiasm for the cooperation and sharing across ICs
regarding the funding of these applications. Dr. Baldwin noted that, while many
government agencies have SBIR/STTR programs, the Department of Health and Human
Services has one of the highest rates of commercialization, which is a major
goal of the program. There was some discussion of the use of government
research funds for this purpose, and it was clarified that funding is not the
purview of PROG; rather, the PROG should address such issues as whether the
review process functions optimally. Dr. Braciale asked for information on how
the review panels are constructed and members are chosen, and Dr. Kreek
commented that the review of the two Phases might be examined, since Phase I is
a relatively small portion of the overall funds for a project and the criteria
for Phase II might be the most important issue to consider. Dr. Baldwin
indicated that she will share with the PROG the report of the January SBIR
meeting, which is open to the public.
Rating of Grant Applications (RGA): Deliberations and Decisions
At this (November) meeting, a major topic was the Rating of Grant
Applications (RGA). At the previous meeting (July, 1996), PROG members had
discussed the recommendations in the RGA report, and considered information
which had been obtained from the scientific community through a variety of
channels. At that time, the PROG decided to table several of the ten
recommendations in the RGA report and to focus primarily on whether to use
explicit criteria to structure the review; if so, what these criteria should be;
whether to score/rate the individual criteria; and whether to retain reviewer
assignment of a global score or derive an overall score from criterion
subscores. Pilots were recommended.
Dr. Baldwin indicated that the purpose of using criteria was to obtain
clearer information to assist program staff in making difficult funding
decisions. She used the example of two applications with tied scores, one that is
scientifically exciting but technically flawed and one that is technically
excellent but only moderately exciting. A special award such as a Shannon Award
might be used to provide some funding for the first, if in fact program staff
have sufficient information to make this determination. In the latter case, the
review might more clearly communicate to the applicant about the low enthusiasm
for the work being proposed. In order to determine whether in fact the use of
criteria would assist in clarifying the information provided by reviewers,
pilots were undertaken by both the DRG and the National Institute of Allergy and
Infectious Diseases (NIAID).
The NIAID pilot involved three review groups which reviewed a variety of
mechanisms (K02, K08, R03, R13, R18, P01), and used the three criteria as
originally stated in the RGA report: Impact, Approach, and Feasibility. Dr.
Hortencia Hornbeak of NIAID reported that reviewers indicated, via
questionnaires, that they felt in general that the use of criteria resulted in
improved focus and improved information for applicants, and that they would want
their own applications reviewed using these criteria. The majority of
respondents judged preparation and discussion time to be the same or no longer
with the use of criteria, and found the criteria easy to use. The majority of
reviewers did not favor having a separate criterion for Innovation/Creativity,
nor did they consider it feasible
to derive the overall score for applications from a standard algorithm using
individual criterion subscores, although they did provide subscores.
The DRG pilot used eight study sections, including the basic, clinical and
behavioral sciences, which were divided into two groups of four; one group was
given a set of three criteria (Impact, Feasibility, and Investigator/Environment)
and the other was given a set of four criteria (the same three plus
Innovation/Creativity). Only R01 and R29 applications were included in the
pilot. All reviewers were asked to structure written critiques and discussion
by criteria and to assign a global score as usual. Each of the applications for
which a reviewer was to write a critique was randomly assigned to one of three
scoring conditions: numeric (same as overall rating scale), alphabetic, or no
score for individual criteria. This was an exercise only; subscores were not
used or reported as part of the review, but this approach allowed each reviewer
to experience each condition. Evaluation included a debriefing by a member of
the Office of Extramural Research senior staff after each study section meeting,
with a DRG senior staff member observing but with no program or review staff
present. Reviewers and program staff were asked to complete questionnaires.
The results of the debriefings and early returns of reviewer questionnaires
were presented. These DRG results closely mirrored those of the NIAID pilot.
The results indicated that the reviewers generally felt that structuring the
written critiques and review meeting discussion by criteria was helpful in
giving better focus and balance to the review, in decreasing the tendency to
overfocus on technique or technical details, and might in fact offer a fairer
review. Reviewers were nearly unanimous in their opposition to
scoring by individual criteria, and had lower enthusiasm for alphabetic than for
numeric rating. They favored having reviewers assign only an overall global
score. On the individual criteria, they felt that more than three or four would
have deleterious effects and might fragment the review process; they opposed
having Innovation/Creativity as a separate criterion to be addressed for all
applications, and indicated that if anything it should be a bonus category.
Reviewers indicated that Innovation/Creativity would be a difficult criterion
for an investigator to explicitly address, and that they would not want to have
to do so themselves in writing applications. However, they were enthusiastic
about having their own applications reviewed using the explicit criteria of
Impact, Feasibility, and Investigator/Environment. They indicated a desire to
see research productivity highlighted within the Investigator/Environment
criterion, and felt that Innovation/Creativity would most appropriately be
addressed as part of Impact. They requested clearer instructions and recommended
that the chair of each study section be specifically trained in the process of
structuring the discussion by criteria; some groups attributed their ease in the
use of criteria for discussion to a well-prepared and organized chair. Concerns
were expressed about how to address past activities, either progress on a
previous grant (for competing continuation applications) or responses to
previous critiques (for amended applications).
There was discussion of the pilot results and members' sentiments regarding
the use of criteria; indications seemed to be leading in the general direction
of using criteria to structure the written critique and the review discussion.
A firm decision on this issue will be made in January or February 1997 based on
additional information to be obtained from program staff.
The criteria would likely include at least Impact, Feasibility, and
Investigator/Environment; the decision as to whether Innovation/Creativity would
be best dealt with as a separate criterion is under continuing consideration.
There was general agreement on the importance of fostering innovation and
creativity in research, and considerable discussion on how best to ensure that
such applications are submitted, receive appropriate reviews, and are supported
by the Institutes and Centers. Regardless of whether a separate review criterion
is adopted, the consideration of Innovation/Creativity in review and in
research in general will continue to be important for the PROG and for the NIH.
The three generally accepted criteria are stated below with their working
definitions, which would be accompanied by some guidelines or
suggestions/examples of what they might encompass:
IMPACT: The extent to which, if successfully carried out, the project
will make an original and important contribution.
FEASIBILITY: The extent to which the conceptual framework, design,
methods, and analyses are adequately developed, well integrated, and appropriate
to the aims of the project.
INVESTIGATOR/ENVIRONMENT: The extent to which the investigators,
available resources, institutional commitment, and any other unique features
will contribute to the success of the proposed research.
It was emphasized at the meeting that it is important for the scientific
community to understand that the use of these criteria does not represent a
fundamental change in the PHS 398 instructions, nor does it add criteria to the
areas that reviewers are instructed to use in considering the scientific merit
of applications during the review process. Rather, this effort represents a
reformatting or reorganization of the presentation of the information, in the
hope that it will result in more informative and perhaps more balanced
critiques. This should be an advantage to those who rely on this information,
both program staff in the Institutes and Centers and research investigators.
The working definition for Innovation/Creativity as a separate criterion is
as follows:
INNOVATION/CREATIVITY: The extent to which the project employs novel
concepts, approaches or methodology.
There will be continued deliberation on how much and what explanation to
provide as elucidation of these criteria. A working group was formed to gather
information on the views of NIH program staff regarding the use of a specific
criterion for creativity; the group will be chaired by Dr. Claude Lenfant,
Director of the National Heart, Lung, and Blood Institute, and will include Dr.
Norman Anderson, Director of the Office of Behavioral and Social Sciences
Research, and
Dr. Marvin Cassman, Director of the National Institute of General Medical
Sciences.
It was generally recommended that there be no changes in the basic numerical
scoring system at this time. There was enthusiasm for having reviewers continue
to assign a global score to each application and that practice will continue.
There was generally low enthusiasm for the idea of assigning scores, whether
numeric or alphabetic, to the individual criteria or deriving an overall score
using such subscores, and those practices will not be adopted, but will be
discussed further at the February PROG meeting. The issue of the rating scale
itself has been tabled for future discussion; for the present time, the 1.0-5.0
rating scale, with 1 as the best possible score, will be retained.
At the next PROG meeting (Feb. 13-14, 1997), members will discuss the issue
of how to evoke creative, innovative projects, the possible use of a separate
criterion for creativity, and the wording of any further explanation of the
criteria to be used.
DRG/IC Review by Mechanism
Dr. Belinda Seto gave an overview of research grant applications at NIH,
indicating the overall number of applications reviewed, and providing some
details on site of review (whether in the DRG or the ICs), and grant mechanisms
(new versus continuation, original versus amended applications, etc.). In 1994
and 1995, the NIH reviewed over 41,000 applications. In 1996, there was a
decrease to 37,858, but Dr. Seto cautioned against overinterpreting this bit of
information, since 1997 data already show more than 17,000 applications and
since 1996 was an atypical year in many respects. Requests for Applications
(RFAs) in the period from 1994 through 1996 declined by about 55%, dropping from
13% of all applications in 1994 to approximately 8% of all applications in 1996;
this is at least in part due to some Institutes having made the decision to use
more Program Announcements than RFAs. Another way to view this decrease is in
terms of unsolicited applications (those which are investigator-initiated rather
than in response to RFAs); nearly 80% of all applications fall into this
category.
As context for site of review, Dr. Seto presented a view of the relative
size of the funding Institutes and Centers at NIH, as indicated by size of
appropriation budget. She then indicated that the review load does not
necessarily map to the size of the IC. In actual numbers of applications
reviewed in the ICs, there was a decline in applications reviewed in National
Institute on Alcohol Abuse and Alcoholism (NIAAA) from 470 to 171 following the
integration of review of the toxicology and prevention and control applications
with the DRG. The National Institute of Mental Health (NIMH), on the other
hand, had a review load of 2,200 applications in 1996, the highest of any single
ICD other than the DRG. Data for the ICs showed that they review a diverse
group of mechanisms.
Overall, 84% of investigator-initiated R01 grant applications were reviewed
in DRG; together the NIAAA, the National Institute on Drug Abuse (NIDA), and NIMH
performed about 11% of R01 reviews, while the rest of the ICs collectively
reviewed approximately 5%. The majority of fellowships and First Independent
Research Support and Transition (FIRST or R29) applications are reviewed in the
DRG; generally training applications and large grant
applications (program project grants, centers, etc.) are reviewed in the
ICs. The majority of unsolicited applications are reviewed in DRG.
In examining the characteristics of these grant applications, it was noted
that more than three-fourths of applications were new, while about 23% were
competing continuation applications, and the majority of these continuations
were in their first renewal. Overall, 69% of applications were original while
about 30% were amended, and the majority of amended applications were in the
first amendment (A1). Supplemental applications represented only 1% of all
applications. It was noted that the policy limiting the number of amendments to
two went into effect for the October 1996 receipt date, so these data do not
reflect the impact of that policy change.
Integration of Neuroscience Reviews
On October 1, 1992, Public Law 102-321 (the ADAMHA Reorganization Act) was
implemented, transferring the three research institutes of the Alcohol, Drug
Abuse and Mental Health Administration to the NIH. At the last meeting, the
PROG heard about the success of the NIAAA/DRG integration effort, which resulted
in the reformulation of five study sections. The effort involved various other
Institutes and Centers (ICs) in addition to NIAAA and DRG, and provided a model
of a process of combining applications within a scientific area and developing
study sections to serve those areas. At the last meeting, it was suggested
that, given the large numbers of applications, areas of science, and ICs that
would be involved in the integration of NIDA and NIMH's review with DRG, perhaps
taking a single area of science would be a reasonable first approach. The areas
of basic neuroscience and biopsychology were suggested. Since the last meeting,
there has been a great deal of activity on the neuroscience integration.
Dr. Baldwin introduced Dr. Richard Nakamura, who offered regrets from NIMH
Director Dr. Steve Hyman, who was unable to address the group directly due to a
conflict of meetings. Dr. Nakamura described the Directors' Working Group on
Neuroscience Integration, which includes Directors of NIMH, NIDA, the National
Institute of Neurological Disorders and Stroke (NINDS), the National Institute of
Child Health and Human Development (NICHD), the National Institute on Aging
(NIA), and the DRG. He presented the guiding principles, developed by this group
and subsequently refined into operating principles by the staff level working
group, chaired by Dr. Elliot Postow of the DRG. Dr. Nakamura indicated that IC
Directors see this as a golden opportunity to organize the review of
neuroscience, and shared Dr. Hyman's enthusiasm for the impressive cooperation
and collaboration that this effort has seen to date. This was echoed by Dr.
Leshner, Director of NIDA and member of PROG; Dr. Leshner indicated that this
effort could eventually provide a model for future efforts. In response to a
question, Dr. Leshner indicated that program projects were being considered. He
further indicated, when asked about a timetable for the integration of
behavioral and social sciences, that more ICs will be involved in that effort,
which will be addressed upon completion of at least some additional steps in the
integration of basic neuroscience applications.
Dr. Elliot Postow of DRG gave a progress report of the staff working group
on Neuroscience Integration. He presented their operating principles, which
emphasize flexibility, scientific focus, and breadth of perspective balanced by
depth of expertise in reviewers. Dr. Postow
discussed the timeline for the activities outlined, which began with the
announcement of the effort to the scientific community on November 18, 1996, at
the annual meeting of the Society for Neuroscience. The goal is to have newly
formed neuroscience study sections reviewing applications in June/July 1998 for
fall 1998 advisory councils, for FY 1999 funding. The group intends to present a
plan to PROG at the spring 1997 meeting. Issues to be addressed include the
relation between grant mechanism and review venue, the interface between
neuroscience and the behavioral sciences, and the impact on existing study
sections of moving grant applications to the new neuroscience study sections.
Dr. Postow emphasized that information is now being shared with the scientific
community, through announcements on the IC homepages, and that a special e-mail
address has been established to receive comments: NEURO@DRGPO.NRG.NIH.GOV
Discussion by PROG members included issues such as review of a mixture of
mechanisms in study sections. It was commented that there are arguments both for
and against this practice and that the issue cannot be resolved without
additional information.
Dr. Postow indicated that the issue has not yet been discussed by the working
groups, but is among the issues to be addressed. There were questions about
other ICs which will need to be included eventually in the neuroscience effort;
Dr. Leshner indicated that the ICs are aware of the current effort and that all
relevant ICs will be included. Several members indicated that the timeline may
be too slow and that they would like to see parallel efforts initiated before
the completion of the neuroscience integration. It was suggested that perhaps
AIDS was an area whose integration could be initiated on a parallel track.
Dr. Donna Dean of DRG indicated that there is some activity in the area of
behavioral sciences occurring in the context of some activities initiated by the
Office of Research on Women's Health, and that Dr. Norman Anderson and several
IC contacts including NIDA, NIAAA, and NIMH are involved.
Given the progress to date, there was a discussion of what role PROG would
play in the integration effort. The group agreed that PROG should step back and
wait for the next progress report. However, they also agreed that the issue of
the review of different mechanisms was one which transcends neuroscience
integration and falls within their purview. Dr. Yamamoto added that this is also
an issue being considered in the DRG Advisory Committee. Dr. Leshner indicated
that NIDA already reviews multiple mechanisms within the same study sections and
offered their situation as an "experimental laboratory" for
consideration. The PROG was noted to be a good venue to discuss the underlying
issues, and it was suggested that the group may want to analyze the experiences
of DRG, NIDA, and perhaps NIMH in their reviews of single vs multiple mechanisms
within study sections.
The issue of low-volume areas of science and whether there is a critical
mass of applications needed for the most fair, thorough review was also raised;
it was noted that size may not be the only critical feature involved. Dr. Demsey
cited the example of the review of urology applications, which are being grouped
in Special Emphasis Panels in DRG; this solution seems to have resulted in fair
and thorough reviews, has satisfied the scientific community, and as a
by-product has relieved some workload problems.
Strategies for Interim Support and Review
Dr. Baldwin announced that the NIH is examining strategies for interim
funding, and in compliance with congressional report language (Report 104-659,
to accompany HR 3755), a meeting will be held at NIH on Dec. 4, 1996, to discuss
the various approaches Institutes and Centers (ICs) are taking. Dr. Baldwin
pointed out that, especially when funding lines are tight, the time required for
revision and resubmission can cause loss of research staff or disruption of
ongoing projects. One way that ICs deal with this situation currently is through
use of interim funding procedures which NIH has always had, such as
administrative supplements. There is, of course, the dilemma of spending funds
on administrative supplements vs new grants; however, costs of administrative
supplements are typically deducted from the grant if/when it is finally awarded
for a competing continuation. If the competing continuation is not funded, then
the interim or "bridge" award is actually a phase-out award. Based on
recent widespread concern about this issue, some ICs are attempting to find new
strategies; it is not clear that there is any single best approach.
The National Institute of General Medical Sciences (NIGMS) has developed a
strategy in which they have set aside a certain amount, and will use that money
to fund selected projects within 10 percentile points of the payline. Dr.
Baldwin explained that NIGMS is making use of an NIH waiver of full Facilities
and Administration costs for this period of interim support. There was some
discussion among PROG members about possible sliding scale funding. Dr. McGowan
discussed programmatic reductions and reinstatements of funds, and reiterated
the importance of peer review in guiding decisions. He explained that the NIAID
sometimes converts applications that cannot be fully funded to R03s (small
grants) with set dollar limits. Dr. Baldwin also noted that Shannon awards are
another way to provide some start-up funds. She pointed out that while there is
no "sliding scale" per se, there are multiple approaches to dealing
with this issue, which ICs tailor to the needs of specific projects or
investigators. There was some discussion about the role of program staff, and it
was generally agreed that study section members should be better educated about
the entire NIH process including how funding decisions are made.
The National Cancer Institute is selecting applications that fall just
outside the payline, requesting 3-5 page responses to any concerns identified
during the peer review process, and considering the applications for select pay
on the basis of those responses. This strategy fits within existing
program staff practices, but formalizes the process somewhat. Dr. Baldwin noted
that none of these strategies is necessarily right for all ICs, and
that some of the strategies are so new that it is not yet clear whether or how
well they might work. Another effort that was mentioned was a DRG pilot on 3-5
page amended applications; it was noted that the DRG Advisory Committee views
this issue more as a program responsibility than a review issue. Dr. Baldwin
noted that all of these approaches relate to the time period from submission to
award, which seems to be the real problem and which is one that NIH reinvention
efforts are targeting. Dr. Yamamoto responded that the DRG Advisory Committee is
forming a working group to look at steps in the review process. Dr. McGowan
discussed an NIAID pilot to expedite the review to award process using
self-referral, electronically enhanced review, and expedited council review.
The timetable for the entire process, from receipt to award, is key to
extramural reinvention activities.
Dr. Sue Shafer, Deputy Director of the National Institute of General Medical
Sciences (NIGMS), presented a pilot in which peer review scores are rounded off.
She explained that the purpose is to minimize the differences in scores,
creating more tied scores, and thus giving greater latitude to program staff in
terms of program priorities. Program staff are now looking at a broader range of
applications in making funding decisions, and Dr. Shafer reported that this
pilot is to be continued for another year. In response to questions about the
differences that might occur if rounding were done after percentiling rather
than before (as is now being done), Dr. James Onken of NIGMS noted that there was
generally no more than a six percentile point difference in scores.
New Agenda Items for Next Meeting
Dr. Baldwin noted that at the last meeting there were requests for an update
on NIH reinvention activities. Since the issue also surfaced more than once
during this meeting, she asked members if they would like to have a complete
presentation on the topic at the February meeting. Members agreed.
In addition, regardless of whether Innovation/Creativity is adopted as a
separate review criterion, it is clear that this is an issue that requires
further attention. Dr. Baldwin suggested that we attempt to find ways to shed
light on the conventional wisdom that peer review is inherently conservative,
that creativity is not rewarded, that it can be directly cultivated, and that if
we review for creativity we will get more creativity, in order to determine
whether in fact these are sound premises. She asked the members to suggest ways
to obtain data that would address this issue so that it can be discussed at the
next meeting.
Another issue that was raised, and for which data were requested, is whether
the scoring metric is stable across an entire review meeting. It is not clear
that this is a problem, but it was agreed that some data could shed light on the
issue. Again, Dr. Baldwin asked the members to forward any ideas or suggestions
to her before the next meeting, in preparation for a discussion.
Another issue to be discussed is whether or not it is advisable to review
various mechanisms within a single scientific peer review panel.
Revision of the PHS 398 application form was also suggested as a tentative
topic. Several members had ideas for revision and suggested that input from the
scientific community be obtained prior to the next major revision. Dr. Baldwin
pointed out that revisions of the form are on a fixed timetable, and that the
comment period for the current revision is coming soon. She suggested that PROG
members send her comments and suggestions. The issue will be revisited at the
next meeting.
Summary and Conclusions
Dr. Baldwin updated the members on NIH meetings being held or planned: the
Director's Panel on Clinical Research, which met on November 5, 1996, and a
meeting to discuss ways to strengthen the Small Business Innovation Research
program, scheduled for January 22, 1997.
The Rating of Grant Applications was thoroughly discussed on the first day
of the meeting, and revisited for decisions on the second day. Information was
presented on pilots using explicit criteria to structure review in the NIAID and
the DRG; reviewers felt that the structuring of written critiques and review
meeting discussion using criteria was valuable in increasing the focus of the
review and giving it better balance, discouraging the tendency to overfocus on
technique. They were strongly opposed to scoring the individual criteria using
either a numeric or an alphabetic rating scale, and favored assigning only a
global score for each application. NIAID reviewers favored using three criteria
(Impact,
Feasibility, and Approach); DRG reviewers favored using the three criteria of
Impact, Feasibility, and Investigator/Environment. None of the reviewers
favored using Innovation/Creativity as a separate criterion. The PROG in their
final discussion seemed to generally agree that criteria should be used to
structure the written critiques and discussion, but that no changes should be
made in the basic numerical scoring system at this time: global scores should be
retained and individual scores for criteria probably should not be assigned.
While all of the PROG members agreed on the importance of creativity, they did
not reach a decision as to whether Innovation/Creativity should be adopted as a
separate criterion; a working group was formed, chaired by Dr. Claude Lenfant,
along with Dr. Anderson and Dr. Cassman, and including representatives from the
clinical, biological and behavioral sciences, to obtain information on the views
of NIH program staff regarding the potential usefulness of this as a review
criterion. This will also allow the opportunity to finalize the recommendation
on scoring individual criteria.
Information was presented on numbers of applications reviewed in the DRG and
in the various ICs, as background for the discussion of which mechanisms are
reviewed in those settings. There was some discussion about the possible value
or problem of reviewing multiple mechanisms within a single scientific review
group, as there are examples of both approaches across the NIH and within the
DRG. This was noted as an appropriate topic for the next meeting.
Progress to date on the integration of review of the NIDA and NIMH with the
DRG was reported. This effort has been initiated by directors and staff from
five Institutes (NIMH, NIDA, NINDS, NICHD, and NIA) who are engaged in the
reorganization of the review of neuroscience applications. Guiding principles
include flexibility and the importance of the scientific expertise required;
scientific review groups will be formed de novo to accommodate the scientific
areas and applications. Input from the scientific community is now being
solicited through an e-mail address (NEURO@DRGPO.NRG.NIH.GOV) and the effort was
announced at the 1996 meeting of the Society for Neuroscience. A tentative
timeline was presented which would result in newly formed groups conducting
review for the funding cycle beginning in FY 1999. The PROG members expressed
strong enthusiasm for parallel efforts in other areas of science being
implemented on a staggered schedule rather than waiting for this initial phase
to be completed.
An update was provided of the NIH strategies currently being evaluated for
interim support, aimed at minimizing disruption that might be caused by lapses
in funding during the application revision process. This was more of an
educational presentation than an issue requiring PROG attention, but one which
should serve as important background for PROG members.
Agenda items for the next meeting include an update on NIH Reinvention
activities and discussions of the following: data or additional information on
the issue of how creativity might best be fostered in the biomedical research
arena; information on the views of NIH program staff on use of
Innovation/Creativity as a separate review criterion; stability of the scoring
metric across review meetings; review of multiple mechanisms by a single
scientific
review group; and possible revisions of the PHS 398 form.
I hereby certify that, to the best of my knowledge, the foregoing minutes
are accurate and complete.
Peggy McCardle, Ph.D., MPH, Executive Secretary
Peer Review Oversight Group
Wendy Baldwin, Ph.D.
Deputy Director for Extramural Research