Editor's Choice

A commitment to best practice in research methods and reporting

BMJ 2025; 389 doi: https://doi.org/10.1136/bmj.r1072 (Published 22 May 2025) Cite this as: BMJ 2025;389:r1072
Kamran Abbasi, editor in chief, The BMJ
kabbasi@bmj.com

Social media platforms have accelerated and widened the debate on research. Any science comes with uncertainties, and medical journals tend to argue that they are seeking to advance the debate on any particular issue, since publishing scientific “truths” is rare. A broader discourse is welcome. At least, this is The BMJ’s position. Invariably, answers to some research questions and interpretations of the evidence base are more uncertain than others. However, too many emerging and established commentators on science believe that all research papers are equal, when the Animal Farm reality is that some research papers are more equal than others.

For example, when Robert F Kennedy Jr was challenged on his views on vaccination and other matters of controversy during his congressional confirmation hearings, his debating style was to selectively quote single papers that he believed supported his arguments (doi:10.1136/bmj.r259).1 Quoting single, unreliable, or outdated studies to refute a rigorously evaluated body of evidence has become something of a social media and political phenomenon. A study from The BMJ was widely shared on social media, spreading the notion that vitamin therapy for measles, in the form of cod liver oil, might be an alternative to vaccination (doi:10.1136/bmj.2.3745.708).2 However, the study was published in 1932, and even its author concluded that, although there was some suggestion that treatment limited the severity of pulmonary complications, “the evidence submitted in this experiment scarcely amounts to a demonstration of the value of intensive vitamin therapy in measles.” There’s nothing new in this: scientific debate has always featured such arguments of convenience or ignorance.

The loser in online scientific skirmishes is any sense that methods matter and that understanding the totality of the evidence base, or as much of it as you can access, is central to contextualising any single study. Where this behaviour leads is problematic: it can be used to overstate the benefit of interventions or to cast doubt where little doubt exists. Such positioning might benefit political, corporate, or academic agendas, but it ends up causing harm to people and the planet. The most glaring recent example is the impression created by the US government that we don’t know the relation between the MMR vaccine and autism, when many well conducted studies, including randomised controlled trials, have supported the vaccine’s safety (doi:10.1136/bmj.r259 doi:10.1136/bmj.r642 doi:10.1136/bmj.329.7467.642).1 3 4 5

We can’t avoid uncertainty. That’s why explaining the risks and benefits of an intervention to patients is an important responsibility for doctors. And sometimes there’s a need to act when certainty around interpretation of the evidence base isn’t as high as we’d like it to be. Take the situation of a pandemic: a new pathogen, hospitals filled to overcapacity (doi:10.1136/bmj-2023-075613),6 excess death rates rising and predicted to soar further, and fresh challenges to public health and clinical practice with only emerging or preliminary evidence to guide us. What to do? Let the health system collapse, destroying the morale and capacity of health professionals, and do little or nothing while accepting hundreds of thousands of deaths? The pioneers of evidence based medicine were clear that it should not be used to prevent health professionals from taking decisions and making judgments about what best to do on the basis of what we know. Evidence based medicine is not cookbook medicine (doi:10.1136/bmj.312.7023.71).7 It is, they said, “about integrating individual clinical expertise and the best external evidence.”

The first step, however, is to quantify the degree of uncertainty. And this is where GRADE comes in, as a system and a series of tools to inform healthcare guidance. GRADE was explained in The BMJ in 2008 (doi:10.1136/bmj.39489.470347.AD)8 and has since become firmly established, grown in scope, and embraced complexity. Yet any system or tool needs to be simple to operate if it is to be useful, and this is one of the rationales that led Gordon Guyatt, an originator of GRADE, and others to create Core GRADE. The idea is to home in on the essentials of GRADE and extend it to embrace issues that have become prominent since GRADE was launched (doi:10.1136/bmj-2024-081902 doi:10.1136/bmj-2024-081903).9 10

Of particular relevance to medical journals is how to quantify uncertainty when there’s evidence of publication bias: a research field might be reliant on small, underpowered clinical trials, or on marketing studies designed to support products created by pharma or other industries. It’s good to see that in both cases Core GRADE advises that the strength of clinical and public health recommendations should be downgraded (doi:10.1136/bmj-2024-083864).11

The BMJ is publishing a series of seven Core GRADE papers. In recent months we’ve also published an update to the CONSORT statement for reporting of randomised controlled trials (doi:10.1136/bmj-2024-081123 doi:10.1136/bmj-2024-081124),12 13 an update to the SPIRIT statement for randomised controlled trial protocols (doi:10.1136/bmj-2024-081477 doi:10.1136/bmj-2024-081660),14 15 risk of bias tools (doi:10.1136/bmj-2024-081199 doi:10.1136/bmj-2024-079839),16 17 and guidance on AI (doi:10.1136/bmj-2024-082505)18 and deep learning (doi:10.1136/bmj-2023-076703).19 This is just a small selection of our output on research methods and reporting,20 and we remain committed to improving the conduct and reporting of established and emerging study designs and technologies.

In this age, when the illusion is that a single, selectively reported—and maybe even misreported—study is king in social media timelines and political soundbites, it’s more essential than ever to champion best practice in research methods, including their reporting and interpretation. And I say that with absolute certainty.

References