Editorial

Why Pre-Registration of Research Must Be Taken More Seriously

1 School of Nursing and Midwifery, La Trobe University, Melbourne 3086, Australia
2 School of Nursing and Midwifery, Queen’s University Belfast, Belfast BT9 7BL, UK
3 Department of Rural Health, University of South Australia, Adelaide 5000, Australia
* Author to whom correspondence should be addressed.
Nurs. Rep. 2023, 13(2), 679-681; https://doi.org/10.3390/nursrep13020060
Submission received: 10 March 2023 / Revised: 21 March 2023 / Accepted: 11 April 2023 / Published: 12 April 2023
The scientific method assumes that researchers use evidence generated from observational research to make predictions (hypotheses) that can be tested experimentally. In clinical disciplines, this is generally within the context of a randomised controlled trial. Sometimes predictions are correct, in which case replication of the study—ideally by a separate and independent research group—is required. If the prediction is replicated (ideally more than once), a case can be made for changing practice. Where the hypothesis is rejected, researchers may need to go back to their observational work to refine their theory/model and generate new predictions. The method is sound. It works. It is how clinical researchers prove that new treatment X is safe and effective. The method requires that researchers do not become tempted to “fiddle” their predictions—but they do [1]. Even in nursing, ranked the most ethical of professions 21 years in a row [2], there are numerous documented examples of outcome manipulation. For example, Kao et al. (2018) [3] reported a randomised controlled trial that aimed to test the effectiveness of interactive cognitive motor training on gait and balance in 62 older adults. The authors concluded that the intervention was effective. However, improvement in gait and balance was not the prediction made by the authors. Originally, the authors predicted that the intervention would improve cognitive function. The cognitive outcome data were omitted from the trial report, presumably because there was no apparent effect on cognitive functioning (they are reported in a separate paper [4]).
What would motivate a research group to misrepresent the findings of their research? The answer is quite straightforward. Research where the prediction is correct is significantly more likely to be published in a “prestigious” journal than research where the prediction is wrong. The Kao et al. (2018) [3] trial was, indeed, published in the International Journal of Nursing Studies, the leading journal in the discipline. Studies where a researcher predicts correctly are also more likely to garner media attention and ultimately advance the careers of those involved. It is easy to see why, when they realised that their prediction was wrong, Kao et al. (2018) [3] succumbed to temptation and substituted gait for cognition. This is sometimes referred to as the I-knew-it-all-along effect, or hindsight bias, as in “we really knew that it was gait that was the outcome that would change and not cognition” [5]. At the end of the day, does it really matter if there is some minor “tinkering” with the order of outcomes at the end of a study?
Changing predictions does matter, essentially for “statistical reasons”—two words likely to make you stop reading and jump to the conclusion of this Editorial. Please don’t. Null hypothesis significance testing—extensively used in clinical research—is based on the assumption that researchers have made a prediction that is being tested: a comparison of a null hypothesis, where there is no relationship between variables, and a hypothesis where there is [5]. If the research group can reject the null hypothesis (p < 0.05), it may be stated that it is plausible that the hypothesis is true. The more statistical tests that are performed—because, for example, more outcomes have been measured—the more likely it is that there will be a p value that crosses the (magical, not magical) 0.05 threshold. There are also multiple different ways in which statistical tests can be undertaken that may be more likely to generate the required p value. Changing predictions after the event and analysing data in multiple different ways matters because researchers will argue—with a substantially higher degree of confidence than is warranted—that an intervention is effective. Kao et al. (2018) [3] did exactly this in their trial, concluding that their intervention should be applied in clinical practice when, clearly, it should not.
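The arithmetic behind this multiplicity problem is easy to demonstrate. The following sketch (illustrative only, not taken from the editorial) computes the chance of at least one “significant” result when every measured outcome is tested at α = 0.05 and every null hypothesis is in fact true, both analytically and by simulation:

```python
import random

def prob_false_positive(n_outcomes: int, alpha: float = 0.05) -> float:
    """Probability of at least one p < alpha across n_outcomes
    independent tests when every null hypothesis is actually true."""
    return 1 - (1 - alpha) ** n_outcomes

for k in (1, 5, 10, 20):
    print(f"{k:2d} outcomes -> P(at least one false positive) = "
          f"{prob_false_positive(k):.2f}")

# Monte Carlo check: under a true null, a p value is uniform on [0, 1],
# so drawing random.random() stands in for running a real test.
random.seed(1)
trials = 100_000
k = 10
hits = sum(any(random.random() < 0.05 for _ in range(k))
           for _ in range(trials))
print(f"Simulated rate for {k} outcomes: {hits / trials:.2f}")
```

With ten outcomes, the chance of at least one spurious p < 0.05 is roughly 40%; with twenty it is roughly 64%. This is why an outcome selected after the data have been seen carries far less evidential weight than one specified in advance.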
The mechanism for ensuring that researchers stick with their predictions is well understood—pre-registration of the research outcomes and analysis plan with a registry, such as ANZCTR, clinicaltrials.gov, or the Open Science Framework. By pre-registering their work, researchers place their bets (predictions) with an independent, publicly accessible registry. This means that if there are any deviations between what was planned and what was reported, they can be spotted. The key word here is “can”. The problem is that registries are places where researchers lodge information about their predictions. There is no mechanism to determine if the registration entry and publication match. The burden for this work falls to the journal editor and peer reviewer. It is safe to say that: 1. journal editors and reviewers are already burdened and 2. neither group is aware of the need to check [6]. Consequently, even if a study is pre-registered, the chances of authors being caught if they flip the outcomes are remote. Where authors are caught and challenged—as was the case in the Kao et al. (2018) [3] example—they can easily deflect the criticism (see the authors’ explanation of why outcomes were switched [7]).
In clinical disciplines such as nursing, the research we undertake directly impacts patient care. Consequently, it is vitally important that researchers strictly adhere to the scientific method. We consider that researchers, reviewers, and journal editors need to take pre-registration far more seriously than is currently the case. To this end, Nursing Reports is mandating that authors include a statement about the registration status of their research when they submit their manuscripts for consideration for publication. Studies not properly registered will be required to include a statement in the limitations section of the manuscript indicating such and advising that readers may consequently infer a high risk of bias.

Author Contributions

Conceptualization, R.G.; writing—original draft preparation, R.G.; writing—review and editing, R.G., D.B., D.R.T. and M.J. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Souza, N.V.; Nicolini, A.C.; dos Reis, I.N.R.; Sendyk, D.I.; Cavagni, J.; Pannuti, C.M. Selective outcome reporting bias is highly prevalent in randomized clinical trials of nonsurgical periodontal therapy. J. Periodontal Res. 2023, 58, 1–11.
  2. Nurses Retain Top Ethics Rating in U.S., But Below 2020 High. Available online: https://news.gallup.com/poll/467804/nurses-retain-top-ethics-rating-below-2020-high.aspx (accessed on 26 February 2023).
  3. Kao, C.-C.; Chiu, H.-L.; Liu, D.; Chan, P.-T.; Tseng, I.-J.; Chen, R.; Niu, S.-F.; Chou, K.-R. Effect of interactive cognitive motor training on gait and balance among older adults: A randomized controlled trial. Int. J. Nurs. Stud. 2018, 82, 121–128; Erratum in Int. J. Nurs. Stud. 2020, 111, 103777.
  4. Chan, P.-T.; Chang, W.-C.; Chiu, H.-L.; Kao, C.-C.; Liu, D.; Chu, H.; Chou, K.-R. Effect of interactive cognitive-motor training on eye-hand coordination and cognitive function in older adults. BMC Geriatr. 2019, 19, 27.
  5. Nosek, B.A.; Ebersole, C.R.; DeHaven, A.C.; Mellor, D.T. The preregistration revolution. Proc. Natl. Acad. Sci. USA 2018, 115, 2600–2606.
  6. Gray, R.; Badnapurkar, A.; Hassanein, E.; Thomas, D.; Barguir, L.; Baker, C.; Jones, M.; Bressington, D.; Brown, E.; Topping, A. Registration of randomized controlled trials in nursing journals. Res. Integr. Peer Rev. 2017, 2, 8.
  7. Chou, K.-R. Corrigendum to “Effect of interactive cognitive motor training on gait and balance among older adults: A randomized controlled trial” [Int. J. Nurs. Stud. 2018, 82, 121–128]. Int. J. Nurs. Stud. 2020, 111, 103777.

Share and Cite

MDPI and ACS Style

Gray, R.; Bressington, D.; Thompson, D.R.; Jones, M. Why Pre-Registration of Research Must Be Taken More Seriously. Nurs. Rep. 2023, 13, 679-681. https://doi.org/10.3390/nursrep13020060

