In June 2014, NIH held a joint workshop with the Nature Publishing Group and Science on the reproducibility and rigor of research findings, with journal editors representing more than 30 basic/preclinical science journals in which NIH-funded investigators most often publish. The workshop focused on identifying common opportunities in the scientific publishing arena to enhance rigor and further support research that is reproducible, robust, and transparent.
The journal editors reached consensus on a set of principles to facilitate these goals, which a considerable number of journals have agreed to endorse. These principles are shown below.
A section outlining the journal’s policies for statistical analysis should be included in the Information for Authors, and the journal should have a mechanism to check the statistical accuracy of submissions.
Journals should have no limit or generous limits on the length of methods sections (including online options), while at the same time encouraging efficient and clear presentation to ensure a thorough examination by reviewers.
Journals should use a checklist during editorial processing to ensure the reporting of key methodological and analytical information to reviewers and readers. (A proposed set of key information is listed below).
Core set of standards for rigorous reporting of study design (Adapted from Landis et al.)
Include these reporting standards in Information for Authors or other public place. Require authors to fill out a checklist, ideally upon submission, to state where the required information is located in the manuscript.
Encourage the use of community-based standards (such as nomenclature standards and reporting standards like ARRIVE), where applicable.
- Replicates
Require that investigators report how often each experiment was performed and whether the results were substantiated by repetition under a range of conditions. Sufficient information about sample collection must be provided to distinguish between independent biological data points and technical replicates.
- Statistics
Require that statistics be fully reported in the paper, including the statistical test used, the exact value of N, and definitions of center, dispersion, and precision measures (e.g., mean, median, SD, SEM, confidence intervals).
- Randomization
Require authors to state whether the samples were randomized and to specify the method of randomization, at a minimum for all animal experiments.
- Blinding
Require authors to state whether experimenters were blinded to group assignment and outcome assessment, at a minimum for all animal experiments.
- Sample-size estimation
Require authors to state whether an appropriate sample size was computed when the study was being designed, and to include the statistical method of computation. If no power analysis was used, describe how the sample size was determined.
- Inclusion and exclusion criteria
Require authors to clearly state the criteria used for exclusion of any data or subjects. Include any similar experimental results that were omitted from the reporting for any reason, especially if the results do not support the main findings of the study. Describe any outcomes or conditions that were measured or used but are not reported in the results section.
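Several of the checklist items above (statistics reporting, randomization, blinding, sample-size estimation) can be made concrete with a short sketch. The following is a minimal, stdlib-only Python example under assumed parameters (an effect size of interest of d = 0.8, α = 0.05, power = 0.80); the normal-approximation formula it uses is a common simplification of a full t-test power analysis, not a method prescribed by these guidelines, and the group labels and measurements are purely illustrative.

```python
import math
import random
from statistics import NormalDist, mean, stdev

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Two-sample sample size via the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

n = n_per_group(effect_size=0.8)  # assumed effect size of interest
print(n)  # 25 per group

# Randomization: shuffle animal IDs, then assign them to coded groups
# ("A"/"B") so that experimenters remain blinded to treatment identity.
rng = random.Random(20140601)           # record the seed for reporting
animal_ids = list(range(1, 2 * n + 1))
rng.shuffle(animal_ids)
groups = {"A": animal_ids[:n], "B": animal_ids[n:]}

# Statistics reporting: center, dispersion, and precision for one
# group of simulated (illustrative, not real) measurements.
measurements = [rng.gauss(10.0, 2.0) for _ in range(n)]
m, sd = mean(measurements), stdev(measurements)
sem = sd / math.sqrt(len(measurements))
print(f"mean={m:.2f} SD={sd:.2f} SEM={sem:.2f} N={len(measurements)}")
```

Reporting the seed alongside the randomization method, as sketched here, is one way to make the assignment procedure itself reproducible.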
Stipulate, at a minimum, that all datasets on which the conclusions of the paper rely must be made available upon request (where ethically appropriate) during consideration of the manuscript (by editors and reviewers), and upon reasonable request immediately upon publication.
Recommend deposition of datasets in public repositories, where available. Datasets in repositories should be bidirectionally linked to the published article in a way that ensures proper attribution of data production.
Encourage presentation of all other data values in machine-readable format in the paper or its supplementary information. Require materials sharing after publication.
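As a sketch of what "machine-readable" can mean in practice, the hypothetical snippet below writes individual data points (rather than only group summaries) to CSV, with a column distinguishing biological from technical replicates as called for in the replicates standard above. The column names and values are illustrative assumptions, not a mandated schema.

```python
import csv
import io

# Hypothetical per-measurement records: each row is one data point,
# annotated so biological replicates (distinct animals) can be told
# apart from technical replicates (repeat measurements of one sample).
rows = [
    {"animal_id": "m01", "replicate_type": "biological", "value": 9.8},
    {"animal_id": "m01", "replicate_type": "technical",  "value": 9.9},
    {"animal_id": "m02", "replicate_type": "biological", "value": 11.2},
]

buf = io.StringIO()  # stands in for a supplementary-data file
writer = csv.DictWriter(buf, fieldnames=["animal_id", "replicate_type", "value"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

A flat per-observation table like this lets readers recompute N, summary statistics, and replicate structure directly, rather than relying on values reported in figures.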
Encourage sharing of software and require, at a minimum, a statement in the manuscript describing whether the software is available and how it can be obtained.
Have a policy stating that if the journal publishes a paper, it assumes responsibility to consider publication of refutations of that paper, according to its usual standards of quality.
Establish best practice guidelines for image-based data (for example, screening images for manipulation, such as Western blots).
Require description of biological material with enough information to uniquely identify the reagents (for example, a unique accession number in a repository), in particular for:
- antibodies: also report source, characteristics, dilutions, and how they were validated
- cell lines: also report source, authentication, and mycoplasma contamination status
- animals: also report source, species, strain, sex, age, husbandry, and inbred and strain characteristics of transgenic animals
The signatories represent journals, associations, and societies that publish or edit preclinical biological research — an area of research that encompasses both exploratory studies and hypothesis-testing studies, with many different designs. The journals, associations, and societies listed below endorse the principles and guidelines with the aim of facilitating the interpretation and repetition of experiments as they have been conducted in the published study. These measures and principles do not obviate the need for replication and reproduction in subsequent investigations to establish the robustness of published results across multiple biological systems.
NIH encourages the guidelines to be adapted and expanded to fit the unique needs and challenges of specific research fields. Adapted guidelines, or guidelines that use the NIH principles and guidelines as a model, will be posted here as they become available.